Metamagical Themas
Questing for the Essence of Mind and Pattern

Author of the Pulitzer Prize-winning Gödel, Escher, Bach

An Interlocked Collection of Literary, Scientific and Artistic Studies

Notes on the Cover 

A Spontaneous Essay on Whirly Art and Creativity 

The drawing on the cover is a somewhat atypical example of a non-representational 
form of art I devised and developed over a period of years quite a 
long time ago, and which my sister Laura once rather light-heartedly dubbed "Whirly 
Art". The name stuck, for better or for worse. Generally speaking, I did Whirly Art 
on long thin strips of paper (available in rolls for adding machines) rather than on 
sheets of standard format. A typical piece of Whirly Art is five or six inches high and 
five or six feet long. Many are ten feet long, however, and some are as much as fifteen 
or even twenty feet in length. The one-dimensionality of Whirly Art was deliberate, of 
course: I was inspired by music and drew many visual fugues and canons. The time 
dimension was replaced by the long space dimension. I used the narrow width of the 
paper to represent something like pitch (although there was no strict mapping in any 
sense). A "voice" would be a single line tracing out some complex shape as it 
progressed in "time" along the paper. Several such voices could interact, and notions 
of what made "good" or "bad" visual harmony or counterpoint soon became intuitive 
to me. 

The curvilinear motions constituting a single voice came from a blend of 
alphabets. At that time (the mid-60's), I was absolutely fascinated by the many 
writing systems found in and around India, exemplified by Tamil, Sinhalese, 
Kanarese, Telugu, Bengali, Hindi, Burmese, Thai, and many others. I studied some of 
them quite carefully, and even invented one of my own, based on the principles that 
most Indian scripts follow. It was natural that the motions my hand and mind were 
getting accustomed to would find their way into my visual fuguing. Thus was born 
Whirly Art. 

Over the next several years, I did literally thousands of pieces of Whirly Art. 
Each one was totally improvised, in pen, so that there was no going back: a mistake 
was a mistake! Alternatively, a mistake could be interpreted as a very daring move 
from which it would be difficult, but not impossible, to recover gracefully. In other 
words, what seemed at first to be a disastrous mistake could turn into a joyful 
challenge! (I am sure that jazz improvisers will know exactly what I am talking 
about.) Sometimes, of course, I would fail, but other times I would succeed (at least 
by my own standards, since I was both performer and "listener").

Whirly Art became a (very) highly idiosyncratic language, with its own 
esthetic and traditions. However, traditions are made to be broken, and as soon as I 
spotted a tradition, I began experimenting around, violating it in various ways to see 
how I might move beyond my current state, how I might "jump out of the system". 
Style succeeded style, and I found myself paralleling the development of music. I 
moved from baroque Whirly Art (fugues, canons, and so forth) to "classical" Whirly 
Art, thence to "romantic" Whirly Art. After several years (it was now the late 60's), I 
reached the twentieth century, and found myself spiritually imitating such 
favorite composers of mine as Prokofiev and Poulenc. I did not copy any pieces 
specifically, but simply felt a kinship to those composers' style. Whirly Art is not 
translated music, but metaphorical music.

It is natural to wonder if I managed to jump beyond the twentieth century and 
make visual 21st-century music. That would have been quite a feat! Actually, in the 
early 70's I found that I simply was slowing down in production of Whirly Art. It had 
taken me seven years to recapitulate the history of Western music! At that point, I 
seemed to run out of creative juices. Of course, I could still make new Whirly Art 
then, as I can now, but I simply was less often inclined to do so. And today, I hardly 
ever do any Whirly Art, although the way that I draw curvy lines and letterforms 
bears indelible marks of Whirly Art. 

The piece on the cover, then, is atypical because it was done on an ordinary 
sheet of paper and has no direction of temporal flow. Also, there really is no concept of 
counterpoint in it. Still, it has something of a Whirly Art spirit. There are also seven 
Whirly alphabets in the book, one on each of the title pages of the seven sections. 
They are all somewhat atypical as well, but for slightly different reasons. Each was 
done on an ordinary sheet of paper, but there is still always a clear flow, namely from 
'A' to 'Z'. The real atypicality is the fact that genuine letters from a genuine alphabet 
are being used. I usually eschewed real letters, preferring to use shapes inspired by 
letters: shapes more complex and, well, "whirly" than most letters, even more so than 
Tamil or Sinhalese letters, which are pretty darn whirly.

Whirly Art is, I feel, quite possibly the most creative thing I have ever done. 
That, of course, is my opinion. Other people may disagree. It is a fairly strange and 
idiosyncratic form of art, however, and cannot be instantly understood. It has its own 
logic, related to the logics of musical harmony 

and counterpoint, Indian alphabets, gestalt perception, and who knows what else. I've 
kept it all quite literally in my closet for years, rolled up and piled into many paper 
bags and cardboard boxes. Because of its physical awkwardness, it is hard to show to 
people. But Whirly Art itself, and the experience of doing it, is an absolutely central 
fact about my way of looking at art, music, and creativity. Practically every time I 
write about creativity, some part of my mind is re-enacting Whirly Art experiences. In other words, a lot of 
my convictions about creativity come from self-observation rather than from scholarly 
study of the manuscripts or sketches of various composers or painters or writers or 
scientists. Of course, I have done some of that type of scholarship too, because I am 
fascinated by creativity in general-but I feel that to some extent "you don't really 
understand it unless you've done it", and so I rely a great deal on that personal 
experience. I feel that way that "I know what I'm talking about." 

However, I would make a slightly stronger statement: Any two creative things 
that I've done seem to be, at some deep level, isomorphic. It's as if Whirly Art and 
mathematical discoveries and strange dialogues and little pieces of piano music and so 
on are all coming from a very similar core, and the same mechanisms are being 
exploited over and over again, only dressed up differently. Of course, it's not all of the 
same quality: my real music is not as good as my visual music, for instance. But 
because I have this conviction that the core creativity behind all these things is really 
the same (at least in my own case), I am trying like mad to get at, and to lay bare, that 
core. For that reason, I pursue ever-simpler domains in which I can feel myself doing 
"the same thing". In Chapter 24 of this book (in some sense the most creative chapter, 
not surprisingly), I write about three of those domains: the Seek-Whence domain, the 
Copycat domain, and the Letter Spirit domain. 

It is the Letter Spirit domain, "gridfonts" in particular, that is currently my 
most intense obsession. That domain came out of a lifelong fascination with our 
alphabet and other writing systems. I simply boiled away what I considered to be less 
interesting aspects of letterforms-I boiled and boiled until I was left with what might 
be called the "conceptual skeletons" of letterforms. That is what gridfonts are about. 
People who have not shared my alphabetic fascination often underestimate at first the 
potential range of gridfonts, thinking that there might be a few and that's all. That is 
dead wrong. There are a huge number of them, and their variety is astounding.

As I look at the gridfonts I produce, and as I feel myself producing a gridfont, I 
feel that what I am doing is just Whirly Art all over again, in a new and ridiculously 
constrained way. The same mechanisms of shape transformation, the same quest for 
grace and harmony, the same intuitions about what works and what doesn't, the same 
desire to "jump out of the system": all this is truly the same. Doing gridfonts is 
therefore very exciting to me and provides a new proving ground for my speculations. 
The one advantage that gridfonts have over Whirly Art is that they are preposterously 
constrained. This means that the possibilities for choice can be watched much more 
easily. It does not mean that a choice can be explained easily, but at least it can be 
watched. In a way, gridfonts are allowing me to re-experience the Whirly Art period of 
my life, but with the advantage of several years' thinking about artificial intelligence and 
how I would like to try to make it come about. In other words, I can now hope that 
perhaps I can get a handle (a bit of one, anyway) on what is going on in creativity by means of 
computer modeling of it.

Since I feel that, in a fundamental sense, Whirly Art creativity is no deeper 
than gridfont creativity, the study of gridfont creation (more specifically, the computer 
modeling of gridfont creation) could reveal some things that I have sought for a long 
time. Therefore the next few years will be an important time for me, a time to see if I 
can really get at the essence, via modeling, of what my mind is doing when I create 
something that to me is excitingly novel.

This book, as it says on its cover and in the Introduction, deals with Mind 
and Pattern. To me, boiling things down to their conceptual skeletons is the royal road 
to truth (to mix metaphors rather horribly). I think that a lot of truth about Mind and 
Pattern lies waiting to be extracted in the tiny domains that I have carved out very 
painstakingly over the past seven years or so in Indiana. I urge you to keep these 
kinds of things in mind as you read this book. This "confession", coming as it does in 
a most unexpected place, is a very spontaneous one and probably captures as well as 
anything could the reason that my research is focused as it is, and the reason that I 
wrote this book. 




This book takes its title from the column I wrote in Scientific American 
between January 1981 and July 1983. In that two-and-a-half-year span, I produced 25 
columns on quite a variety of topics. My choice of title deliberately left the focus of 
the column somewhat hazy, which was fine with me as well as with Scientific 
American. When Dennis Flanagan, the magazine's editor, wrote to me in mid-1980 to 
offer me the chance to write a column in that distinguished publication, he made it 
clear that what was desired was a bridge between the scientific and the literary 
viewpoints, something he pointed out Martin Gardner had always done, despite the 
ostensibly limiting title of his column, "Mathematical Games". Here is how Dennis 
put it in his letter:

I might emphasize the flexible nature of the department we have been calling 
"Mathematical Games". As you know, under this title, Martin has written a 
great deal that is neither mathematical nor game-like. Basically, "Mathematical 
Games" has been Martin's column to talk about anything under the sun that 
interests him. Indeed, in our view, the main import of the column has been to 
demonstrate that a modern intellectual can have a range of interests that are not 
confined by such words as "scientific" or "literary". We hope that whoever 
succeeds Martin will feel free to cover his own broad range of interests, which 
are unlikely to be identical to Martin's.

What a refreshingly open attitude! So I was being asked to be the successor to Martin 
Gardner, but not necessarily to continue the same column. Rather than filling the same 
role as Martin had, I would merely occupy the same physical spot in the magazine. 
I had been offered a unique opportunity to say pretty much anything I wanted to say 
to a vast, ready-made audience, in a prestigious context. Carte blanche, in short. What 
more could I ask? Even so, I had to deliberate long and hard about whether to take it, 
because I did not consider myself primarily a writer, but a thinker and researcher, and 
time taken in writing would surely be time taken away from research. The 
conservative pathway, following what was known, would have been to say no and 
just do research. The adventurous pathway, exploring the new opportunity while still doing 
some research, was tempting. Both were risky, since I knew that either way I would 
inevitably wonder, "How would things have gone had I decided the other way?" 
Moreover, I had no idea how long I might write my column, since that was not 
stipulated. It could go on for many years, or I could decide it was too much for me, 
and quit after a year.

In a way, I knew from the beginning that I would take the offer, I guess 
because I am basically more adventurous than I am conservative. But it was a little 
like purchasing new clothes: no matter how much you like them, you still want to see 
how you look in them before you buy them, so you put them on and parade around the 
store, looking at yourself in the mirror and asking whoever is with you what they 
think of it. So I talked it over with numerous people, and finally decided as I had 
expected: to take the offer. 

* * * 

For the first year, Martin Gardner and I alternated columns. I have to admit 
that even though I was utterly free to "be myself", I felt somewhat tradition-bound. 
True, I had metamorphosed his title into my own title (see Chapter 1 for an 
explanation), but I was aware that readers of Martin's column would, naturally 
enough, be expecting a similar type of fare. It took a little while for me to test the 
waters, getting reader reactions and seeing if the magazine was satisfied with my 
performance, a performance very different in style from Martin's, after all. Needless 
to say, some readers were disappointed that I was not a clone of Martin Gardner, but 
others complimented me on how I had managed to keep the same level of quality 
while changing the style and content greatly. It was hard, knowing that people were 
constantly comparing me with someone very different from me. It was particularly 
hard when people who should have known better really confused my role with 
Martin's. For instance, as late as June 1983, at a conference on artificial intelligence, a 
colleague who spotted me came up to me and eagerly told me a math puzzle he'd just 
discovered and solved, hoping I would put it in my "Mathematical Games" column. 
How often did I have to tell people that my column was not called "Mathematical 
Games"?

I doubt that anyone loved Martin Gardner's column more than I did, or owed 
more to it. Yet I did not want my identity confused with someone else's. So writing 
this column and being in the shadow of someone superlative was not always easy. 
But I think I hit my stride and grew comfortable with my new role after a few months.

In 1982, Martin retired, leaving the space entirely to me. It was a chore, to be 
sure, to get a column out each month, but it was also a lot of fun. In any case, what 
mattered to me the most was to do my best to make the column interesting and diverse 
and highly provocative. I took Dennis' offer quite literally, not restricting myself to 
purely scientific topics, but venturing into musical and literary topics as well. 

After a year and a half, I was beginning to wonder how long I could sustain 



it without seriously jeopardizing my research. I decided to divide up my long list of 
prospective topics into categories: columns I would love to do, columns I would 
simply enjoy doing, and columns I could write with interest but no real passion. I 
found I had about a year's worth left in the first category, maybe another year's worth 
left in the second, and then a large number in the third. It seemed, then, that in another 
year or so it would be a good time to reassess the whole issue of writing the column. 
As it turned out, my thinking was quite consonant with evolving desires at the 
editorial level of the magazine. They were most interested in launching a new column 
to be devoted to the recreational aspects of computing, and our plans dovetailed well. 
My column could be phased out just as the new one was being phased in. And that is 
the way it came to pass, with two surprise columns by Martin Gardner filling the gap. 
My farewell to readers came as a postscript to Martin's final column, in September 1983.

Thus my era as a columnist came to an end. As I look back on it, I feel it lasted 
just about the right length of time: long enough to let me get a significant amount 
said, but not so long that it became a real drag on me. This way, at least, I got to 
explore that avenue that was so tempting, and yet it didn't radically alter the course of 
my life. So in sum, I am quite pleased with my stint at Scientific American. I am 
proud to have been associated with that venerable institution, and to have filled that 
unique slot for a time, especially coming right on the heels of someone of such high caliber.

* * * 

The diversity of my columns is worth discussing for a moment. On the 
surface, they seem to wander all over the intellectual map: from sexism to music to 
art to nonsense, from game theory to artificial intelligence to molecular biology to 
the Cube, and more. But there is, I believe, a deep underlying unity to my columns. I 
felt that gradually, as I wrote more and more of them, regular readers would start to 
see the links between disparate ones, so that after a while, the coherence of the web 
would be quite clear. My image of this 
was always geometric. I envisioned my intellectual "home territory" as a rather large 
region in some conceptual space, a region that most people do not see as a connected 
unit. Each new column was in a way a new "random dot" in that conceptual space, 
and as dots began peppering the space more fully over the months, the shape of my 
territory would begin to emerge more clearly. Eventually, I hoped, there would 
emerge a clear region associated with the name "Metamagical Themas".

Of course, I wonder if my 25 1/2 columns are sufficient to convey the 
connectedness of my little patch of intellectual territory, or if, on the contrary, they 
would leave a question mark in the mind of someone who read them all in succession 
without any other explanation. Would it simply seem like a patchwork quilt, a curious 
potpourri? Truth to tell, I suspect that 25 1/2 columns are not quite enough, on their own. 
Probably the dots are too 



sparsely distributed to suggest the rich web of potential cross-connections there. For 
that reason, in drawing all my columns together to form a book, I decided to try to flesh 
out that space by including a few other recent writings of mine that might help to fill 
some of the more blatant gaps. There are seven such pieces included (indicated by 
asterisks in the table of contents). I believe they help to unify this book.

If someone were to ask me, "What is your new book about, in a word?", I 
would probably mutter something like "Mind and Pattern". That, in fact, was one 
title I considered for the column, way back when. Certainly it tells what most 
intrigues me, but it doesn't convey it quite vividly or passionately enough. Yes, I am a 
relentless quester after the chief patterns of the universe: central organizing 
principles, clean and powerful ways to categorize what is "out there". Because of this, I 
have always been pulled to mathematics. Indeed, even though I dropped the idea of 
being a professional mathematician many years ago, whenever I go into a new 
bookstore, I always make a beeline for the math section (if there is one). The reason is that 
I feel that mathematics, more than any other discipline, studies the fundamental, 
pervasive patterns of the universe. However, as I have gotten older, I have come to 
see that there are inner mental patterns underlying our ability to conceive of 
mathematical ideas, universal patterns in human minds that make them receptive not 
only to the patterns of mathematics but also to abstract regularities of all sorts in the 
world. Gradually, over the years, my focus of interest has shifted to those more 
subliminal patterns of memory and associations, and away from the more formal, 
mathematical ones. Thus my interest has turned ever more to Mind, the principal 
apprehender of pattern, as well as the principal producer of certain kinds of pattern.

To me, the deepest and most mysterious of all patterns is music, a product of 
the mind that the mind has not come close to fathoming yet. In some sense all my 
research is aimed at finding patterns that will help us to understand the mysteries of 
musical and visual beauty. I could be bolder and say, "I seek to discover what 
musical and visual beauty really are." However, I don't believe that those mysteries 
will ever be truly cleared up, nor do I wish them to be. I would like to understand 
things better, but I don't want to understand them perfectly. I don't wish the fruits of 
my research to include a mathematical formula for Bach's or Chopin's music. Not that 
I think it possible. In fact, I think the very idea is nonsense. But even though I find the 
prospect repugnant, I am greatly attracted by the effort to do as much as 
possible in that direction. Indeed, how could anyone hope to approach the concept of 
beauty without deeply studying the nature of formal patterns and their organizations 
and relationships to Mind? How can anyone fascinated by beauty fail to be intrigued 
by the notion of a "magical formula" behind it all, chimerical though the idea 
certainly is? And in this day and age, how can anyone fascinated by creativity and 
beauty fail to see in computers the ultimate tool for exploring their essence? Such 
ideas are the inner fire that propels my research and my writings, and they are the core of this book.

There is another aspect of my inner fire that is brought out in the writings here 
collected, particularly toward the end, but it pops up throughout. That is a concern 
with the global fate of humanity and the role of the individual in helping determine it. 
I have long been an activist, someone who periodically gets fired up by some cause 
and ardently works for it, exhorting everyone else I come across to get involved as 
well. I am a fierce believer in the value of passion and commitment to social causes, 
someone baffled and troubled by apathy. One of my personal mottos is: "Apathy on 
the individual level translates into insanity at the mass level", a saying nowhere better 
exemplified than by today's insane dedication of so many human and natural 
resources to the building up of unimaginably catastrophic arsenals, all while 
mountains of humanity are starving and suffering in horrible ways. Everyone knows 
this, and yet the situation remains this way, getting worse day by day. We do live in a 
ridiculous world, and I would not wish to talk about the world without indicating my 
confusion and sadness, but also my vision and hope, concerning our shared human fate.

* * * 

Inevitably, people will compare this book with my earlier books, Gödel, 
Escher, Bach: an Eternal Golden Braid and The Mind's I, coedited with my friend 
Daniel Dennett. Let me try for a moment to anticipate them. 

GEB was a unique sort of book: the detailed working-out of a single potent 
spark. It was a kind of explosion in my mind, triggered by my renewed love of mathematical 
logic after a long absence. It was the first time I had tried to write anything long, and I 
pulled out all the stops. In particular, I made a number of experiments with style, 
especially in writing dialogues based on musical forms such as fugues and canons. In 
essence, GEB was one extended flash having to do with Kurt Gödel's famous 
incompleteness theorem, the human brain, and the mystery of consciousness. It is well 
described on its cover as "a metaphorical fugue on minds and machines".

The Mind's I is very different from Gödel, Escher, Bach. It is an extensively 
annotated anthology rather than the work of a single person. It is far more like a 
monograph than GEB is, in that it has a unique goal: to probe the mysteries of matter 
and consciousness in as vivid and jolting a way as possible, through stories that 
anyone can read and understand, followed by careful commentaries by Dan Dennett 
and myself. Its subtitle is "Fantasies and Reflections on Self and Soul".

One thing that GEB and The Mind's I have in common is their internal 
structure of alternation. GEB alternates between dialogues and chapters, while The 
Mind's I alternates between fantasies and reflections. I guess I like this contrapuntal 
mode, because it crops up again in the present volume. Here, I alternate between 
articles and postscripts.

If GEB is an elaborate fugue on one very complex theme, and MI is a 
collection of many variations on a theme, then perhaps MT is a fantasia employing 
several themes. If it were not for the postscripts, I would say that it was disjointed. 
However, I have made a great effort to tie together the diverse themes (Themas) by 
writing extensive commentaries that cast the ideas of each article in the light of other 
articles in the book. Sometimes the postscripts approach the length of the piece they 
are "post", and in one case (Chapter 24) the postscript is quite a bit longer than its article.

The reason for that particularly long postscript is that I decided to use it to 
describe some aspects of my own current research in artificial intelligence. There are 
other places as well in the book where I touch on my research ideas, though I never go 
into technical details. My main concern is to give a clear idea of certain central riddles 
about how minds work, riddles that I have run across over and over again in different 
guises. The questions I raise are difficult, but I find them as beguiling as mathematical 
ones. In any case, this book will give readers a better understanding of how my 
research and the rest of my ideas fit together.

* * * 

One aspect of this book that, I must admit, sometimes makes me uneasy is the 
striking disparity in the seriousness of its different topics. How can both Rubik's Cube 
and nuclear Armageddon be discussed at equal length in one book by one author? Partly 
the answer is that life itself is a mixture of things of many sorts, little and big, light and 
serious, frivolous and formidable, and Metamagical Themas reflects that complexity. 
Life is not worth living if one can never afford to be delighted or have fun. There is 
another way of explaining this huge gulf. Elegant mathematical structures can be as 
central to a serious modern worldview as are social concerns, and can deeply 
influence one's ways of thinking about anything, even such somber and colossal 
things as total nuclear obliteration. In order to comprehend that which is 
incomprehensible because it is too huge or too complex, one needs simpler models. 
Often, mathematics can provide the right starting point, which is why beautiful 
mathematical concepts are so pervasive in explanations of the phenomena of nature on the 
microlevel. They are now proving to be of great help also on a larger scale, as Robert 
Axelrod's lovely work on the Prisoner's Dilemma so impeccably demonstrates (see 
Chapter 29).

The Prisoner's Dilemma is poised about halfway between the Cube and 
Armageddon, in terms of complexity, abstraction, size, and seriousness. I submit that 
abstractions of this sort are direly needed in our times, because many people, even 
remarkably smart people, turn off when faced with issues that are too big. We need to 
make such issues graspable. To make 
make such issues graspable. To make 



them graspable and fascinating as well, we need to entice people with the beauties of 
clarity, simplicity, precision, elegance, balance, symmetry, and so on. 

Those artistic qualities, so central to good science as well as to good insights 
about life, are the things that I have tried to explore and even to celebrate in 
Metamagical Themas. (It is not for nothing that the word "magic" appears inside the 
title!) I hope that Metamagical Themas will help people to bring more clarity, 
precision, and elegance to their thinking about situations large and small. I also hope 
that it will inspire people to dedicate more of their energies to global problems in this 
lunatic but lovable world, because we live in a time of unprecedented urgency. If we 
do not care enough now, future generations may not exist to thank us for their 
existence and for our caring. 



Section I: 

Snags and Snarls 
The title of this section conveys the image of problematical twistiness. The twists 
dealt with here are those whereby a system (sentence, picture, language, organism, 
society, government, mathematical structure, computer program, etc.) twists back on 
itself and closes a loop. A very general name for this is reflexivity. When realized in 
different ways, this abstraction becomes a concrete phenomenon. Examples are: self-reference, 
self-description, self-documentation, self-contradiction, self-questioning, self-response, 
self-justification, self-refutation, self-parody, self-doubt, self-definition, self-creation, 
self-replication, self-modification, self-amendment, self-limitation, self-extension, 
self-application, self-scheduling, self-watching, and on and on. In the 
following four chapters, these strange phenomena are illustrated in sentences and stories 
that talk about themselves, ideas that propagate themselves from mind to mind, machines that 
replicate themselves, and games that modify their own rules. The variety of these loopy 
tangles is quite remarkable, and the subject is far, far from being exhausted. Furthermore, 
although their connection with paradox may make reflexive systems seem no more than 
frivolous playthings, the study of them is of great importance in understanding many 
mathematical and scientific developments of this century, and is becoming ever more 
central to theories of intelligence and consciousness, whether natural or artificial. 
Reflexivity will therefore make many return appearances in this book.


On Self-Referential Sentences 

January, 1981 

I never expected to be writing a column for Scientific American. I remember once, years 
ago, wishing I were in Martin Gardner's shoes. It seemed exciting to be able to plunge 
into almost any topic one liked and to say amusing and instructive things about it to a 
large, well-educated, and receptive audience. The notion of doing such a thing seemed 
ideal, even dreamlike. Over the next several years, by a series of total coincidences 
(which turned out to be not so total), I met one after another of Martin's friends. First it 
was Ray Hyman, a psychologist who studies deception. He introduced me to the 
magician Jerry Andrus. Then I met the statistician and magician Persi Diaconis and the 
computer wizard Bill Gosper. Then came Scott Kim, and soon afterward, the 
mathematician Benoit Mandelbrot. All of a sudden, the world seemed to be orbiting 
Martin Gardner. He was at the hub of a magic circle: people with exciting, novel, often 
offbeat ideas, people with many-dimensional imaginations. Sometimes I felt overawed by 
the whole remarkable bunch. 

One day, five or so years ago, I had the pleasure of spending several hours with 
Martin in his house, discussing many topics, mathematical and otherwise. It was an 
enlightening experience for me, and it gave me a new view into the mind of someone 
who had contributed so much to my own mathematical education. Perhaps the most 
striking thing about Martin to me was his natural simplicity. I had been told that he is an 
adroit magician. This I found hard to believe, because one does not usually imagine 
someone so straightforward pulling the wool over anyone's eyes. However, I did not see 
him do any magic tricks. I simply saw his vast knowledge and love of ideas spread out 
before me, without the slightest trace of pride or pretense. The Gardners-Martin and his 
wife Charlotte-entertained me for the day. We ate lunch in the kitchen of their cozy three- 
story house. It pleased me somehow to see that there was practically no trace of 
mathematics or games or tricks in their simple but charming living room. 
After lunch-sandwiches that Martin and I made while standing by the kitchen sink-we 
climbed the two flights of stairs to Martin's hideaway. With his old typewriter and all 
kinds of curious jottings in an ancient filing cabinet 

and his legendary library of three-by-five cards, he reminded me of an old-time 
journalist, not of the center of a constellation of mathematical eccentrics and game 
addicts, to say nothing of magicians, anti-occultists, and of course the thousands of 
readers of his column. 

Occasionally we were interrupted by the tinkling of a bell attached to a string that 
led down the stairs to the kitchen, where Charlotte could pull it to get his attention. A 
couple of phone calls came, one from the logician and magician Raymond Smullyan, 
someone whose name I had known for a long time, but who I had no idea belonged to 
this charmed circle. Smullyan was calling to chat about a book he was writing on Taoism, 
of all things! For a logician to be writing about what seemed to me to be the most anti- 
logical of human activities sounded wonderfully paradoxical. (In fact, his book The Tao 
Is Silent is delightful and remarkable.) All in all, it was a most enjoyable day. 

Martin's act will be a hard one to follow. But I will not be trying to be another 
Martin Gardner. I have my own interests, and they are different from Martin's, although 
we have much in common. To express my debt to Martin and to symbolize the heritage 
of his column, I have kept his title "Mathematical Games" in the form of an anagram: 
"Metamagical Themas". 

What does "metamagical" mean? To me, it means "going one level beyond 
magic". There is an ambiguity here: on the one hand, the word might mean 
"ultramagical"-magic of a higher order-yet on the other hand, the magical thing about 
magic is that what lies behind it is always nonmagical. That's metamagic for you! It 
reflects the familiar but powerful adage "Truth is stranger than fiction." So my 
"Metamagical Themas" will, in Gardnerian fashion, attempt to show that magic often 
lurks where few suspect it, and, by the opposite token, that magic seldom lurks where 
many suspect it. 

* * * 

In his July, 1979 column, Martin wrote a very warm review of my book Godel, 
Escher, Bach: an Eternal Golden Braid. He began the review with a short quotation from 
my book. If I had been asked to guess what single sentence he would quote, I would 
never have been able to predict his choice. He chose the sentence "This sentence no 
verb." It is a catchy sentence, I admit, but something about seeing it again bothered me. I 
remembered how I had written it one day a few years earlier, attempting to come up with 
a new variation on an old theme, but even at the time it had not seemed as striking as I 
had hoped it would. After seeing it chosen as the symbol of my book, I felt challenged. I 
said to myself that surely there must be much cleverer types of self-referential sentence. 
And so one day I wrote down quite a pile of self-referential sentences and showed them 
to friends, which began a mild craze among a small group of us. In this column, I will 
present a selection of what I consider to be the cream of that crop. 

Before going further, I should explain the term "self-reference". Self-reference is 
ubiquitous. It happens every time anyone says "I" or "me" or "word" or "speak" or 
"mouth". It happens every time a newspaper prints a story about reporters, every time 
someone writes a book about writing, designs a book about book design, makes a movie 
about movies, or writes an article about self-reference. Many systems have the capability 
to represent or refer to themselves somehow, to designate themselves (or elements of 
themselves) within the system of their own symbolism. Whenever this happens, it is an 
instance of self-reference. 

Self-reference is often erroneously taken to be synonymous with paradox. This 
notion probably stems from the most famous example of a self-referential sentence, the 
Epimenides paradox. Epimenides the Cretan said, "All Cretans are liars." I suppose no 
one today knows whether he said it in ignorance of its self-undermining quality or for 
that very reason. In any case, two of its relatives, the sentences "I am lying" and "This 
sentence is false", have come to be known as the Epimenides paradox or the liar paradox. 
Both sentences are absolutely self-destructive little gems and have given self-reference a 
bad name down through the centuries. When people speak of the evils of self-reference, 
they are certainly overlooking the fact that not every use of the pronoun "I" leads to paradox. 

* * * 

Let us use the Epimenides paradox as our jumping-off point into this fascinating 
land. There are many variations on the theme of a sentence that somehow undermines 
itself. Consider these two: 

This sentence claims to be an Epimenides paradox, but it is lying. 
This sentence contradicts itself-or rather-well, no, actually it doesn't! 

What should you do when told, "Disobey this command"? In the following 
sentence, the Epimenides quality jumps out only after a moment of thought: "This 
sentence contains exactly threee erors." There is a delightful backlash effect here. 

Kurt Godel's famous Incompleteness Theorem in metamathematics can be 
thought of as arising from his attempt to replicate as closely as possible the liar paradox 
in purely mathematical terms. With marvelous ingenuity, he was able to show that in any 
mathematically powerful axiomatic system S it is possible to express a close cousin to the 
liar paradox, namely, "This formula is unprovable within axiomatic system S." 

In actuality, the Godel construction yields a mathematical formula, not an English 
sentence; I have translated the formula back into English to show what he concocted. 
However, astute readers may have noticed that, strictly speaking, the phrase "this 
formula" has no referent, since when a formula is translated into an English sentence, that sentence is no longer a formula! 

If one pursues this idea, one finds that it leads into a vast space. Hence the 
following brief digression on the preservation of self-reference across language 
boundaries. How should one translate the French sentence Cette phrase en francais est 
difficile a traduire en anglais ? Even if you do not know French, you will see the problem 
by reading a literal translation: "This sentence in French is difficult to translate into 
English." The problem is: To what does the subject ("This sentence in French") refer? If 
it refers to the sentence it is part of (which is not in French), then the subject is self- 
contradictory, making the sentence false (whereas the French original was true and 
harmless); but if it refers to the French sentence, then the meaning of "this" is strained. 
Either way, something disquieting has happened, and I should point out that it would be 
just as disquieting, although in a different way, to translate it as: "This sentence in 
English is difficult to translate into French." Surely you have seen Hollywood movies set 
in France, in which all the dialogue, except for an occasional Bonjour or similar phrase, is 
in English. What happens when Cardinal Richelieu wants to congratulate the German 
baron for his excellent command of French? I suppose the most elegant solution is for 
him to say, "You have an excellent command of our language, mon cher baron", and 
leave it at that. 

But let us undigress and return to the Godelian formula and focus on its meaning. 
Notice that the concept of falsity (in the liar paradox) has been replaced by the more 
rigorously understood concept of provability. The logician Alfred Tarski pointed out that 
it is in principle impossible to translate the liar paradox exactly into any rigorous 
mathematical language, because if it were possible, mathematics would contain a genuine 
paradox-a statement both true and false-and would come tumbling down. 

Godel's statement, on the other hand, is not paradoxical, though it constitutes a 
hair-raisingly close approach to paradox. It turns out to be true, and for this reason, it is 
unprovable in the given axiomatic system. The revelation of Godel's work is that in any 
mathematically powerful and consistent axiomatic system, an endless series of true but 
unprovable formulas can be constructed by the technique of self-reference, revealing that 
somehow the full power of human mathematical reasoning eludes capture in the cage of any formal system. 

In a discussion of Godel's proof, the philosopher Willard Van Orman Quine 
invented the following way of explaining how self-reference could be achieved in the 
rather sparse formal language Godel was employing. Quine's construction yields a new 
way of expressing the liar paradox. It is this: 

"yields falsehood, when appended to its quotation." yields falsehood, when 
appended to its quotation. 

This sentence describes a way of constructing a certain typographical entity -namely, a 
phrase appended to a copy of itself in quotes. When you carry out the construction, 
however, you see that the end product is the sentence itself-or a perfect copy of it. (There 
is a resemblance here to the way self-replication is carried out in the living cell.) The 
sentence asserts the falsity of the constructed typographical entity, namely itself (or an 
indistinguishable copy of itself). Thus we have a less compact but more explicit version 
of the Epimenides paradox. 
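Quine's append-to-quotation device is exactly the structure behind self-replicating programs, which programmers call "quines" in his honor. As an illustrative aside (not part of the original column), here is a minimal Python sketch of both constructions; the helper name `quine_of` is hypothetical, chosen here only for illustration:

```python
# Quine's construction: a phrase operating on its own quotation.
# quine_of is a hypothetical helper name, used here for illustration only.

def quine_of(phrase: str) -> str:
    """Append a phrase to its own quotation (Python repr uses single quotes)."""
    return repr(phrase) + " " + phrase

# Reconstructs Quine's liar sentence: quotation first, then the phrase itself.
print(quine_of("yields falsehood, when appended to its quotation."))

# The same structure at the program level: a classic Python quine whose output
# is its own source. The string s is quoted code; s % s appends that code to
# its own quotation, just as in the sentence above.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running the last two lines prints the two lines themselves, so the program, like the sentence, describes a construction whose end product is a perfect copy of itself.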

It seems that all paradoxes involve, in one way or another, self-reference, whether 
it is achieved directly or indirectly. And since the credit for the discovery-or creation-of 
self-reference goes to Epimenides the Cretan, we might say: "Behind every successful 
paradox there lies a Cretan." 

On the basis of Quine's clever construction we can create a self-referential question: 

What is it like to be asked, 
"What is it like to be asked, self-embedded in quotes after its comma?" 
self-embedded in quotes after its comma? 

Here again, you are invited to construct a typographical entity that turns out, when the 
appropriate operations have been performed, to be identical with the set of instructions. 
This self-referential question suggests the following puzzle: What is a question that can 
serve as its own answer? Readers might enjoy looking for various solutions to it. 

* * * 

When a word is used to refer to something, it is said to be being used. When a word is 
quoted, though, so that one is examining it for its surface aspects (typographical, 
phonetic, etc.), it is said to be being mentioned. The following sentences are based on this 
famous use-mention distinction: 

You can't have your use and mention it too. 

You can't have "your cake" and spell it "too". 

"Playing with the use-mention distinction" isn't "everything in life, you know". 
In order to make sense of "this sentence", you will have to ignore the quotes in "it". 

This is a sentence with "onions", "lettuce", "tomato", and "a side of fries to go". 

This is a hamburger with vowels, consonants, commas, and a period at the end. 

The last two are humorous flip sides of the same idea. Here are two rather extreme 
examples of self-referential use-mention play: 

Let us make a new convention: that anything enclosed in triple quotes-for example, 
'''No, I have decided to change my mind; when the triple quotes close, just skip 
directly to the period and ignore everything up to it'''-is not even to be read (much 
less paid attention to or obeyed). 

À ceux qui ne comprennent pas l'anglais, la phrase citée ci-dessous ne dit rien: "For 
those who know no French, the French sentence that introduced this quoted 
sentence has no meaning." 

The bilingual example may be more effective if you know only one of the two 
languages involved. 

Finally, consider this use-mention anomaly: "i should begin with a capital letter." 
This is a sentence referring to itself by the pronoun "I", a bit mauled, instead of through a 
pointing-phrase such as "this sentence"; such a sentence would seem to be arrogantly 
proclaiming itself to be an animate agent. Another example would be "I am not the 
person who wrote me." Notice how easily we understand this curious nonstandard use of 
"I". It seems quite natural to read the sentence this way, even though in nearly all 
situations we have learned to unconsciously create a mental model of some person-the 
sentence's speaker or writer-to whom we attribute a desire to communicate some idea. 
Here we take the "I" in a new way. How come? What kinds of cues in a sentence make us 
recognize that when the word "I" appears, we are supposed to think not about the author 
of the sentence but about the sentence itself? 

* * * 

Many simplified treatments of Godel's work give as the English translation of his famous 
formula the following: "I am not provable in axiomatic system S." The self-reference 
that is accomplished with such sly trickery in the formal system is finessed into the 
deceptively simple English word "I", and we can-in fact, we automatically do-take the 
sentence to be talking about itself. Yet it is hard for us to hear the following sentence as 
talking about itself: "I already took the garbage out, honey." 

The ambiguous referring possibilities of the first-person pronoun are a source of many 
interesting self-referential sentences. Consider these: 

I am not the subject of this sentence. 

I am jealous of the first word in this sentence. 

Well, how about that-this sentence is about me! 

I am simultaneously writing and being written. 

This raises a whole new set of possibilities. Couldn't "I" stand for the writing instrument 
("I am not a pen"), the language ("I come from Indo-European roots"), the paper ("Cut 
me out, twist me, and glue me to form a Mobius strip, please")? One of the most involved 
possibilities is that "I" stands not for the physical tokens we perceive before us but for 
some more ethereal and intangible essence, perhaps the meaning of the sentence. But 
then, what is meaning? The next examples explore that idea: 

I am the meaning of this sentence. 

I am the thought you are now thinking. 

I am thinking about myself right now. 

I am the set of neural firings taking place in your brain as you read the set of letters 
in this sentence and think about me. 

This inert sentence is my body, but my soul is alive, dancing in the sparks of your brain. 

The philosophical problem of the connections among Platonic ideas, mental activity, 
physiological brain activity, and the external symbols that trigger them is vividly raised 
by these disturbing sentences. 

This issue is highlighted in the self-referential question, "Do you think anybody 
has ever had precisely this thought before?" To answer the question, one would have to 
know whether or not two different brains can ever have precisely the same thought (as 
two different computers can run precisely the same program). An illustration of this 
possibility may be found in Figure 24-2. I have often wondered: Can one brain have the 
same thought more than once? Is a thought something Platonic, something whose essence 
exists independently of the brain it is occurring in? If the answer is "Yes, thoughts are 
brain-independent", then the answer to the self-referential question would also be yes. If 
it is not, then no one could ever have had the same thought before-not even the person 
thinking it! 

Certain self-referential sentences involve a curious kind of communication 
between the sentence and its human friends: 

You are under my control because I am choosing exactly what words you are made 
out of, and in what order. 

No, you are under my control because you will read until you have reached the end 
of me. 

Hey, down there-are you the sentence I am writing, or the sentence I am reading? 

And you up there-are you the person writing me, or the person reading me? 

You and I, alas, can have only one-way communication, for you are a person and I, 
a mere sentence. 

As long as you are not reading me, the fourth word of this sentence has no referent. 

The reader of this sentence exists only while reading me. 

Now that is a rather frightening thought! And yet, by its own peculiar logic, it is certainly true. 

Hey, out there-is that you reading me, or is it someone else? 

Say, haven't you written me somewhere else before? 

Say, haven't I written you somewhere else before? 

The first of the three sentences above addresses its reader; the second addresses its 
author. In the last one, an author addresses a sentence. 

Many sentences include words whose referents are hard to figure out because of their 
ambiguity-possibly accidental, possibly deliberate: 

Thit sentence is not self-referential because "thit" is not a word. 

No language can express every thought unambiguously, least of all this one. 

In the Escher-inspired Figure 1-1, visual and verbal ambiguity are simultaneously exploited. 

* * * 

FIGURE 1-1. Ambiguity: What is being described-the hand, or the writing? [Drawing by 
David Moser, after M. C. Escher.] 

Let us turn to a most interesting category, namely sentences that deal with the 
languages they are in, once were in, or might have been in: 

When you are not looking at it, this sentence is in Spanish. 

I had to translate this sentence into English because I could not read the original. 

The sentence now before your eyes spent a month in Hungarian last year and was 
only recently translated back into English. 

If this sentence were in Chinese, it would say something else. 

.siht ekil ti gnidaer eb d'uoy werbeH ni erew ecnetnes siht fI 

The last two sentences are examples of counterfactual conditionals. Such a 
sentence postulates in its first clause (the antecedent) some contrary-to-fact situation 
(sometimes called a "possible world") and extrapolates in its second clause (the 
consequent) some consequence of it. This type of sentence opens up a rich domain for 
self-reference. Some of the more intriguing self-referential counterfactual conditionals I 
have seen are the following: 

If this sentence didn't exist, somebody would have invented it. 
If I had finished this 

If there were no counterfactuals, this sentence would not be paradoxical. 
If wishes were horses, the antecedent of this conditional would be true. 
If this sentence were false, beggars would ride. 
What would this sentence be like if it were not self-referential? 
What would this sentence be like if π were 3? 

Let us ponder the last of these (invented by Scott Kim) for a moment. In a world 
where π actually did have the value 3, you wouldn't ask about how things would be if it 
were 3. Instead, you might muse "if π were 2" or "if π weren't 3". So one's first answer to 
the question might be this: "What would this sentence be like if π weren't 3?". But there 
is a problem. The referent of "this sentence" has now changed identity. So is it fair to say 
that the second sentence is an answer to the first? It is a little like a woman who muses, 
"What would I be doing now if I had had different genes?" The problem is that she would 
not be herself; she would be someone else, perhaps the little boy across the street, playing 
in his sandbox. Personal pronouns like "I" cannot quite keep up with such strange 
hypothetical world-shifts. 

But getting back to Scott Kim's counterfactual, I should point out that there is an 
even more serious problem with it than so far mentioned. Changing the value of π is, to 
put it mildly, a radical change in mathematics, and presumably you cannot change 
mathematics radically without having radically changed the fabric of the universe within 
which we live. So it is quite doubtful that any of the concepts in the sentence would make 
any sense if π were 3 (including the concepts of "π", "3", and so on). 

Here are two more counterfactual conditionals to put in your pipe and smoke: 

If the subjunctive was no longer used in English, this sentence would be grammatical. 
This sentence would be seven words long if it were six words shorter. 

These two lovely examples, invented by Ann Trail (who is also responsible for quite a 
few others in this column), bring us around to sentences that comment on their own form. 
Such sentences are quite distinct from ones 

that comment on their own content (such as the liar paradox, or the sentence that says 
"This sentence is not about itself, but about whether it is about itself."). It is easy to make 
up a sentence that refers to its own form, but it is hard to make up an interesting one. 
Here are a few more quite good ones: 

because I didn't think of a good beginning for it. 

This sentence was in the past tense. 

This sentence has contains two verbs. 

This sentence contains one numeral 2 many. 

a preposition. This sentence ends in 

In the time it takes you to read this sentence, eighty-six letters could have been 
processed by your brain. 

* * * 

David Moser, a composer and writer, is a detector and creator of self-reference and 
frame-breaking of all kinds. He has even written a story in which every sentence is self- 
referential (it is included in Chapter 2). It might seem unlikely that in such a limited 
domain, individual styles could arise and flourish, but David has developed a self- 
referential style quite his own. As a mutual friend (or was it David himself?) wittily 
observed, "If David Moser had thought up this sentence, it would have been funnier." 
Many Moser creations have been used above. Some further Moserian delights are these: 

This is not a complete. Sentence. This either. 

This sentence contains only one nonstandard English flutzpah. 

This gubblick contains many nonsklarkish English flutzpahs, but the overall 
pluggandisp can be glorked from context. 

This sentence has cabbage six words. 

In my opinion, it took quite a bit of flutzpah to just throw in a random word so that there 
would be cabbage six words in the sentence. That idea inspired the following: "This 
sentence has five (5) words." A few more miscellaneous Moserian gems follow: 

This is to be or actually not two sentences to be, that is the question, combined. 
It feels sooo good to have your eyes run over my curves and serifs. 
This sentence is a !!!! premature punctuator 

Sentences that talk about their own punctuation, as the preceding one does, can be 
quite amusing. Here are two more: 

This sentence, though not interrogative, nevertheless ends in a question mark? 

This sentence has no punctuation semicolon the others do period 

Another ingenious inventor of self-referential sentences is Donald Byrd, several 
of whose sentences have already been used above. Don too has his own very 
characteristic way of playing with self-reference. Two of his sentences follow: 

This hear sentence do'nt know Inglish purty good. 

If you meet this sentence on the board, erase it. 

The latter, via its form, alludes to the Buddhist saying "If you meet the Buddha on the 
road, kill him." 

Allusion through similarity of form is, I have discovered, a marvelously rich vein 
of self-reference, but unfortunately this article is too short to contain a full proof of that 
discovery. I shall explicitly discuss only two examples. The first is "This sentence verbs 
good, like a sentence should." Its primary allusion is to the famous slogan "Winston 
tastes good, like a cigarette should", and its secondary allusion is to, "This sentence no 
verb." The other example involves the following lovely self-referential remark, once 
made by the composer John Cage: "I have nothing to say, and I am saying it." This 
allows the following rather subtle twist to be made: "I have nothing to allude to, and I am 
alluding to it." 

* * * 

Some of the best self-referential sentences are short but sweet, relying for their effect on 
secondary interpretations of idiomatic expressions or well-known catch phrases. Here are 
five of my favorites, which seem to defy other types of categorization: 

Do you read me? 

This point is well taken. 

You may quote me. 

I am going two-level with you. 

I have been sentenced to death. 

In some of these, even sophisticated non-native speakers would very likely miss what's 
going on. 

Surely no article on self-reference would be complete without including a few 
good examples of self-fulfilling prophecy. Here are a few: 

This prophecy will come true. 

This sentence will end before you can say "Jack Rob 

Surely no article on self-reference would be complete without including a few good 
examples of self-fulfilling prophecy. 

Does this sentence remind you of Agatha Christie? 

That last sentence-one of Ann Trail's-is intriguing. Clearly it has nothing to do with 
Agatha Christie, nor is it in her style, and so the answer ought to be no. Yet I'll be darned 
if I can read it without being reminded of Agatha Christie! (And what is even stranger is 
that I don't know the first thing about Agatha Christie!) 

In closing, I cannot resist the touching plea of the following Byrdian sentence: 

Please, oh please, publish me in your collection of self-referential sentences! 

Post Scriptum. 

This first column of mine triggered a big wave of correspondence, some of which is 
presented in the next chapter. Most of the correspondence was light-hearted, but there 
were a number of serious letters that intrigued me. Here is a repartee that appeared in the 
pages of Scientific American a few months later. 

The kind of structural analysis engaged in, and the resulting questions raised 
by, Douglas Hofstadter in his amusing and intriguing article concerning self- 
referential sentences need not lead inevitably to bafflement of the reader. 

Help is at hand from the "laggard science" psychology, but only from that 
carefully defined quarter of psychology known as behavior analysis, which was 
progenerated by the famous Harvard psychologist B. F. Skinner almost 50 years ago. 

In examining the implications of linguistic analyses such as Hofstadter's for the 
serious student of verbal behavior, Skinner comments in his book About 
Behaviorism (pages 98-99) as follows: 

Perhaps there is no harm in playing with sentences in this way or in 
analyzing the kinds of transformations which do or do not make sentences 
acceptable to the ordinary reader, but it is still a waste of time, particularly 
when the sentences thus generated could not have been emitted as verbal 
behavior. A classical example is a paradox, such as 'This sentence is false', 
which appears to be true if false and false if true. The important thing to 
consider is that no one could ever have emitted the sentence as verbal 
behavior. A sentence must be in existence before a speaker can say, 'This 
sentence is false', and the response itself will not serve, since it did not exist 
until it was emitted. What the logician or linguist calls a sentence is not 
necessarily verbal behavior in any sense which calls for a behavioral analysis. 

As Skinner pointed out long ago, verbal behavior results from contingencies of 
reinforcement arranged by verbal communities, and it is these contingencies that 
must be analyzed if we are to identify the variables that control verbal behavior. 
Until we grasp the full import of Skinner's position, which goes beyond structure to 
answer why we behave as we do verbally or nonverbally, we shall continue to fall 
back on prescientific formulations that are about as useful in understanding these 
phenomena as Hofstadter's quaint metaphorical speculation: "Such a sentence 
would seem to be arrogantly proclaiming itself to be an animate agent." 

George Brabner 
College of Education 
University of Delaware 

I felt compelled to reply to Professor Brabner's interesting views about these matters, and 
so here is what I wrote: 

I assume that the quote from B. F. Skinner reflects Professor Brabner's own 
sentiments about the likelihood of self-referential utterances. I am always baffled by 
people who doubt the likelihood of self-reference and paradox. Verbal behavior comes 
in many flavors. Humor, particularly self-referential humor, is one of the most 
pervasive flavors of verbal behavior in this century. One has only to watch the 
Muppets or Monty Python on television to see dense and intricate webs of self- 
reference. Even advertisements excel in self-reference. 

In art, Rene Magritte, Pablo Picasso, M. C. Escher, John Cage, and dozens of 
others have played with the level-distinction between that which represents and that 
which is represented. The "artistic behavior" that results includes much self-reference 
and many confusing and sometimes exhilaratingly paradoxical 

tangles. Would Professor Brabner say that no one could ever have "emitted" such 
works as "artistic behavior"? Where is the borderline? 

Ordinary language, as I pointed out in my column, is filled with self-reference, 
usually a little milder-seeming than the very sharply pointed paradoxes that Professor 
Brabner objects to. "Mouth", "word", and so on are all self-referential. Language is 
inherently filled with the potential of sharp turns on which it may snag itself. 

Many scholarly papers begin with a sentence about "the purpose of this paper". 
Newspapers report on their own activities, conceivably on their own inaccuracies. 
People say, "I'm tired of this conversation." Arguments evolve about arguments, and 
can get confusingly and painfully self-involved. Has Professor Brabner never thought 
of "verbal behavior" in this light? It is likely that in hunting woolly mammoths, no one 
found it extraordinarily useful to shout, "This sentence is false!" However, civilization 
has come a long way since those days, and the primitive purposes of language have by 
now been almost buried under an avalanche of more complex purposes. 

Part of human nature is to be introspective, to probe. Part of our "verbal 
behavior" deliberately, often playfully, explores the boundaries between conceptual 
levels of systems. All of this has its root in the struggle to survive, in the fact that our 
brains have become so flexible that much of their time is spent in dealing with their 
own activities, consciously or unconsciously. It is simply a consequence of 
representational power-as Kurt Godel showed-that systems of increasing complexity 
become increasingly self-referential. 

It is quite possible for people filled with self-doubt to recognize this trait in 
themselves, and to begin to doubt their self-doubt itself. Such psychological dilemmas 
are at the heart of some current theories of therapy. Gregory Bateson's "double bind", 
Victor Frankl's "logotherapy", and Paul Watzlawick's therapeutic ideas are all based 
on level-crossing paradoxes that crop up in real life. Indeed, psychotherapy is itself 
based completely on the idea of a "twisted system of self"-a self that wants to reach 
inward and change some presumably wrong part of itself. 

We human beings are the only species to have evolved humor, art, language, 
tangled psychological problems, even an awareness of our own mortality. Self- 
reference-even of the sharp Epimenides type-is connected to profound aspects of life. 
Would Professor Brabner argue that suicide is not conceivable human behavior? 

Finally, just suppose Professors Skinner and Brabner are right, and no one ever 
says exactly "This sentence is false." Would this mean that study of such sentences is a 
waste of time? Not at all. Physicists study ideal gases because they represent a 
distillation of the most significant principles of the behavior of real gases. Similarly, 
the Epimenides paradox is an "ideal paradox"-one that cuts crisply to the heart of the 
matter. It has opened up vast domains in logic, pure science, philosophy, and other 
disciplines, and will continue to do so despite the skepticism of behaviorists. 

It is a curious coincidence that the only other reply to my article that was printed in the 
"Letters" column of Scientific American also came from the University of Delaware. Here 
it is: 

On Self-Referential Sentences 


I hope that you do not receive any correspondence concerning Douglas R. 
Hofstadter's article on self-reference. I should like to inform your readers that many 
years of study on this problem have convinced me that no conclusion whatsoever can be 
drawn from it that would stand up to a moment's scrutiny. There is no excuse for 
Scientific American to publish letters from those cranks who consider such matters to 
be worthy of even the slightest notice. 

A. J. Dale 
Department of Philosophy 
University of Delaware 

I replied as follows: 

Many years of reading such letters have convinced me that no reply 
whatsoever can be given to them that would stand up to a moment's scrutiny. There is 
no excuse for publishing responses to those cranks who send them. 

After these two exchanges had appeared in print, a number of people remarked to me 
that they'd read the two letters from Delaware that had attacked me, and had enjoyed my 
responses. I guess it wasn't so obvious to everyone that Dale's letter was completely tongue-in-cheek. In fact, that was its point. 

* * * 

Two other letters stand out sharply in my memory. One was from an individual 
who signed himself (I presume it is a male) as "Mr Flash qFiasco". 

Mr Flash insisted that a sentence cannot say what it shows. The former concerns only its 
content, which is supposedly independent of how it manifests itself in print, while the 
latter is a property exclusively of its form, that is, of the physical sentence only when it is 
in print. This distinction sounds crystal-clear at first, but in reality it is mud-blurry. Here 
is some of what Flash wrote me: 

For a sentence to attempt to say what it shows is to commit an error of logical 
types. It seems to be putting a round peg into a square hole, whereas it is instead 
putting a round peg into something which is not a hole at all, square or otherwise. This 
is a category mismatch, not a paradox. It is like throwing the recipe in with the flour 
and butter and eggs. The source of the equivocation is an illegitimate use of the term 
'this'. 'This' can point to virtually anything, but 'this' cannot point to itself. If you stick 
out your index finger, you can point to virtually anything; and by curling it you can 
even point to the pointing finger; but you cannot point to pointing. Pointing is of a 
higher logical type than the thing which is doing the pointing. Similarly, the referent 
of 'this sentence' can be virtually anything but that sentence. Sentences of the form 
exemplified by 'This sentence no verb.' and 'This sentence has a verb.' are not well-formed: 
they commit fallacies of logical type equivocation. Thus their self-referential 
character is not genuine and they present no problem as paradoxes. 



There will always be people around who will object in this manner, and in the 
Brabnerian manner. Such people think it is possible to draw a sharp line between 
attributes of a printed sentence that can be considered part of its form (e.g., the typeface it 
is printed in, the number of words it contains, and so on), and attributes that can be 
considered part of its content (i.e., the things and events and relationships that it refers to). 

Now, I am used to thinking about language in terms of how to get a machine to deal 
with it, since I look at the human brain as a very complex machine that can handle 
language (and many other things as well). Machines, in trying to make sense of 
sentences, have access to nothing more than the form of such sentences. The content, if it 
is to be accessible to a machine, has to be derived, extracted, constructed, or created 
somehow from the sentence's physical structure, together with other knowledge and 
programs already available to the machine. 

When very simple processing is used to operate on a sentence, it is convenient to label 
the information thus obtained "syntactic". For instance, it is clearly a syntactic fact about 
"This sentence no verb." that it contains six vowels. The vowel-consonant distinction is 
obviously a typographical one, and typographical facts are considered superficial and 
syntactic. But there is a problem here. With different depths of processing, aspects of 
different degrees of "semanticity" may be detected. 

Consider, for example, the sentence "Mary was sick yesterday." Let's call it Sentence 
M. Listed below are the results of seven different degrees of processing of Sentence M by 
a hypothetical machine, using increasingly sophisticated programs and increasingly large 
knowledge bases. You should think of them as being English translations, for your 
convenience, of computational structures inside the machine that it can act on and use. 

1. Sentence M contains twenty characters. 

2. Sentence M contains four English words. 

3. Sentence M contains one proper noun, one verb, one adjective, and one adverb, in that order. 

4. Sentence M contains one human's name, one linking verb, one adjective describing a potential health state of a living being, and one temporal adverb, in that order. 

5. The subject of Sentence M is a pointer to an individual named 'Mary', and the predicate is an ascription of ill health to the individual so indicated, on the day preceding the statement's utterance. 

6. Sentence M asserts that the health of an individual named 'Mary' was not good the day before today. 

7. Sentence M says that Mary was sick yesterday. 

Just where is the boundary line that says, "You can't do that much processing!"? 
A machine that could go as far as version 7 would have 



actually understood-at least in some rudimentary sense-the content of Sentence M. Work 
by artificial-intelligence researchers in the field of natural language understanding has 
produced some very impressive results along these lines, considerably more sophisticated 
than what is shown here. Stories can be "read" and "understood", at least to the extent 
that certain kinds of questions can be answered by the machine when it is probed for its 
understanding. Such questions can involve information not explicitly in the story itself, 
and yet the machine can fill in the missing information and answer the question. 
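To make the notion of processing depth concrete, here is a toy sketch (my own illustration, not anything from the column) of the three shallowest levels applied to Sentence M. The tiny hand-built lexicon is of course an assumption, standing in for real grammatical knowledge:

```python
sentence_m = "Mary was sick yesterday."

# Level 1: a purely typographic fact -- count the letters.
letters = sum(c.isalpha() for c in sentence_m)          # 20

# Level 2: slightly deeper -- count the words.
words = sentence_m.rstrip(".").split()                  # 4 English words

# Level 3: deeper still -- crude part-of-speech tags, which already
# require a little knowledge of English (faked here with a toy lexicon).
lexicon = {"Mary": "proper noun", "was": "verb",
           "sick": "adjective", "yesterday": "adverb"}
tags = [lexicon[w] for w in words]

print(letters, len(words), tags)
```

Each level needs a fancier apparatus than the last; by the time a program is filling in unstated facts, as in versions 5 through 7, calling its output "mere form" has become strained.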

I am making this seeming digression on the processing of language by computers 
because intelligent people like Mr Flash qFiasco seem to have failed to recognize that the 
boundary line between form and content is as blurry as that between blue and green, or 
between human and ape. This comparison is not made lightly. Humans are supposedly 
able to get at the "content" of utterances, being genuine language-users, while apes are 
not. But ape-language research clearly shows that there is some kind of in-between 
world, where a certain degree of content can be retrieved by a being with reduced mental 
capacity. If mental capacity is equated with potential processing depth, then it is obvious 
why it makes no sense to draw an arbitrary boundary line between the form and the 
content of a sentence. Form blurs into content as processing depth increases. Or, as I 
have always liked to say, "Content is just fancy form." By this I mean, of course, that 
"content" is just a shorthand way of saying "form as perceived by a very fancy apparatus 
capable of making complex and subtle distinctions and abstractions and connections to 
prior concepts". 

Flash qFiasco's down-home, commonsense distinction between form and content 
breaks down swiftly, when analyzed. His charming image of someone making a 
"category error" by throwing a recipe in with the flour and butter and eggs reveals that he 
has never had Recipe Cake. This is a delicious cake whose batter is made out of cake 
recipes (if you use pie recipes, it won't taste nearly as good). The best results are had if 
the recipes are printed in French, in Baskerville Roman. A preponderance of accents 
aigus lends a deliciously piquant aroma to the cake. My recommendation to Brabner and 
qFiasco is: "Let them eat recipes." 

* * * 

Finally, I come to John Case, a computer scientist who wrote from Yale, insisting 
that there is no conceptual problem whatsoever in translating the French sentence "Cette 
phrase en français est difficile à traduire en anglais" into English. Case's translation was 
the following English sentence: 

The French sentence "Cette phrase en français est difficile à traduire en anglais" 
is difficult to translate into English. 



In other words, Case translates a self-referential French sentence into an other-referential 
English sentence. The English sentence talks about the French sentence-in fact it quotes it 
completely! Something radical is missing here. At one level, of course, Case is right: now 
the two sentences, one French and one English, both are talking about (or pointing to) the 
same thing (the French sentence). But the absolute crux of the French one is its 
tangledness; the English one completely lacks that quality. Clearly Case has had to make 
a sacrifice, a compromise. 

The alternative, which I prefer, is to construct in English an analogue to the 
French sentence: a self-referential English sentence, one that has a tangledness 
isomorphic to that of the French sentence. That's where the essence of the sentence lies, 
after all! "But is that its translation ?" you might ask. A good question. 

Ionesco once remarked, "The French for London is Paris." (Use-mention fanatic 
that I am, I assume that he meant "The French for 'London' is 'Paris'", although it is 
pungent either way.) What he meant was that in understanding situations, French people 
tend to translate them into their own frame of reference. This is of course true for all of 
us. If Mary tells Ann, "My brother died", and if Ann does not know Mary's brother, then 
how can she understand this statement? Surely projection is of the essence: Ann will 
imagine her own brother dying (if she has one-and if not, then her sister, a good friend, 
possibly even a pet!). This alternate frame of reference allows Ann to empathize with 
Mary. Now if Ann did know Mary's brother somewhat, then she might flicker between 
thinking of him as the person she vaguely remembers and thinking of her own brother 
(friend, pet, or whatever) dying. This dilemma (discussed further in the postscript to 
Chapter 24) arises for all beings with their own preferred vantage points: Do I map things 
into what they would be for me, or do I stand apart and survey them completely 
objectively and impassively? 

Case is advocating the latter, which is all very well as an intellectual stance to 
adopt, but when it comes to real life, it just won't cut the mustard. To be concrete, one 
might ask: What was the actual solution used in the French edition of Scientific American 
? The answer, surprising no one, I hope, was this: "This English sentence is difficult to 
translate into French." I rest my case. 

* * * 

I wonder what literalists like John Case would suggest as the proper translation of 
the title of the book All the President's Men (a book about the downfall of President 
Nixon, a downfall that none of the people around him could prevent). Would they say 
that Tous les hommes du Président fills the bill admirably? Back-translated rather 
literally, it means "All the men of the President". It completely lacks the allusion -the 
reference by similarity of form-to the nursery rhyme "Humpty Dumpty". Is that 
dispensable? In my 



opinion, hardly. To me, the essence of the title resides in that allusion. To lose that 
allusion is to deflate the title totally. 

Of course, what do I mean by "that allusion"? Do I wish the French title to 
contain, somehow, an allusion to an English nursery rhyme? That would be rather 
pointless. Well, then, do I want the French title to allude to the French version of 
"Humpty Dumpty"? It all depends how well known that is. But given that Humpty 
Dumpty is practically an unknown figure to French-speaking people, it seems that 
something else is wanted. Any old French nursery rhyme? Obviously not. The critical 
allusion is to the lines "All the King's horses/ And all the King's men/ Couldn't put 
Humpty together again." Are there-anywhere in French literature-lines with a similar 
import? If not, how about in French popular songs? In French proverbs? Fairy tales? 

One might well ask why French-speaking people would ever care about reading a 
book about Watergate in the first place. And even if they did want to read it, shouldn't it 
be completely translated, so that it happens in a French-speaking city? Come to think of 
it, didn't Ionesco once remark that the French for Washington is Montreal? 

Clearly, this is carrying things to an extreme. There must be some middle ground 
of reasonableness. These are matters of subtle judgment, and they are where being human 
and flexible makes all the difference. Rigid rules about translation may lead you to a kind 
of mechanical consistency, but at the sacrifice of all depth and charm. The problem of 
self-referential sentences is just the tip of the iceberg, as far as translation is concerned. It 
is just that these issues show up very early when direct self-reference is concerned. When 
self-reference (or reference in general, for that matter) is indirect, mediated by form, then 
fluidity is required. The understanding of such sentences involves a mixture of deriving 
the content and yet retaining the form in mind, letting qualities of the form conjure up 
flavors and enhance the meaning with a halo of not-quite-conscious pseudo-meanings, 
connotations, flavors, that flicker in the mind, not quite in reach, not quite out of reach. 
Self-reference is a good starting point for investigation of this kind of issue, because it is 
so much on the surface there. You can't sweep the problems under the rug, even though 
some would like to do so. 

* * * 

This first column, together with this postscript, provides a good introduction to 
the book as a whole, because many central issues are touched on: codes, translation, 
analogies, artificial intelligence, language and machines, mind and meanings, self and 
identity, form and content-all the issues I originally was motivated by when first writing 
that collection of teasing self-referential sentences. 




Self-Referential Sentences: 
A Follow-Up 

January, 1982 

January has rolled around again, and I thought I'd give a follow-up to my column of a 
year ago on self-referential sentences, and that is what this column is; however, before we 
get any further, I would like to take advantage of this opening paragraph to warn those 
readers whose sensibilities are offended by explicit self-referential material that they 
probably will want to quit reading before they reach the end of this paragraph, or for that 
matter, this sentence-in fact, this clause-even this noun phrase-in short, this. 

Well, now that we've gotten that out of the way, I would like to say that, since last 
January, I have received piles upon piles of self-referential mail. Tony Durham astutely 
surmised: "What with the likely volume of replies, I should not think you are reading this 
in person." John C. Waugh's letter yelped: "Help, I'm buried under an avalanche of 
readers' responses!" At first, I thought Waugh himself was empathizing with my plight, 
putting words into my own mouth, but then I realized it was his letter calling for help. 
Fortunately, it was rescued, and now is comfortably nestled in a much reduced pile. 
Indeed, I have had to cull from that massive influx of hundreds of replies a very small 
number. Here I shall present some of my favorites. 

Before leaving the topic of mail, I would like to point out that the postmark on 
Ivan Vince's postcard from Britain cryptically remarked, "Be properly addressed." Was 
this an order issued by the post office to the postcard itself? If so, then British postcards 
must be far more intelligent than American ones; I have yet to meet a postcard that could 
read, let alone correct its own address. (One postcard that reached me was addressed to 
me in care of Omni magazine! And yet somehow it arrived.) 

I was flattered by a couple of self-undermining compliments. Richard Ruttan 
wrote, "I just can't tell you how much I enjoyed your first article.", and John Collins said, 
"This does not communicate my delight at January's column." I was also pleased to learn 
that my fame had spread as far as the men's room at the Tufts University Philosophy 
Department, where Dan 



Dennett discovered "This sentence is graffiti. -Douglas R. Hofstadter" penned on the wall. 

* * * 

A popular pastime was the search for interesting self-answering questions. 
However, only a few succeeded in genuinely 'jootsing' (jumping out of the system), 
which, to me, means being truly novel. It seems that successes in this limited art form are 
not easy to come by. John Flagg cynically remarked (I paraphrase slightly): "Ask a self- 
answering question, and get a self-questioning answer." One of my favorites was given 
by Henry Taves: "I fondly remember a history exam I encountered in boarding school 
that contained the following: 'IV. Write a question suitable for a final exam in this course, 
and then answer it.' My response was simply to copy that sentence twice." I was delighted 
by this. Later, upon reflection, I began to suspect something was slightly wrong here. 
What do you think? 

Richard Showstack contributed two droll self-answering questions: "What 
question no verb?" and "What is a question that mentions the word 'umbrella' for no 
apparent reason?" Jim Shiley sent in a clever entry that I modify slightly into "Is this a 
rhetorical question, or is this a rhetorical question?" He also contributed the following: 

Take a blank sheet of paper and on it write: 

How far across the page will this sentence run? 

Now if some polyglot friend of yours points out that the same string of phonemes 
in Ural-Altaic means '2.3 inches', send me a free subscription to Scientific 
American. Otherwise, if the inscription of a question counts both as the question 
and as a unit of measure, I at least get a booby prize. But I think somehow I bent the rules. 

My own solutions to the problem of the self-answering question are actually not 
so much self-answering as self-provoking, as in the following example: "Why are you 
asking me that out of the blue?" It is obvious that when the question is asked out of the 
blue, it might well elicit an identical response, indicating the hearer's bewilderment. 

Philip Cohen relayed the following anecdote about a self-answering question, 
from Damon Knight: "Terry Carr, an old friend, sent us a riddle on a postcard, then the 
answer on another postcard. Then he sent us another riddle: 'How do you keep a turkey 
in suspense?' and never sent the answer. After about two weeks, we realized that was the 
answer." 

* * * 

Several of the real masterpieces sent in belong to what I call the self-documenting 
category, of which a simple example is Jonathan Post's "This 



sentence contains ten words, eighteen syllables and sixty-four letters." A neat twist is 
supplied by John Atkins in his sentence " 'Has eighteen letters' does." The self- 
documenting form can get much more convoluted and introspective. An example by the 
wordplay master Howard Bergerson was brought to my attention by Philip Cohen. It goes as follows: 


In this sentence, the word AND occurs twice, the word EIGHT occurs twice, the word 
FOUR occurs twice, the word FOURTEEN occurs four times, the word IN occurs twice, 
the word SEVEN occurs twice, the word THE occurs fourteen times, the word THIS 
occurs twice, the word TIMES occurs seven times, the word TWICE occurs eight times 
and the word WORD occurs fourteen times. 

That is good, but the gold medal in the category is reserved for Lee Sallows, who 
submitted the following tour de force: 

Only the fool would take trouble to verify that this sentence was composed of ten 
a's, three b's, four c's, four d's, forty-six e's, sixteen f's, four g's, thirteen h's, fifteen 
i's, two k's, nine l's, four m's, twenty-five n's, twenty-four o's, five p's, sixteen r's, 
forty-one s's, thirty-seven t's, ten u's, eight v's, eight w's, four x's, eleven y's, 
twenty-seven commas, twenty-three apostrophes, seven hyphens, and, last but not 
least, a single ! 

I (perhaps the fool) did take trouble to verify the whole thing. First, though, I 
carried out some spot checks. And I must say that when the first random spot check 
worked (I think I checked the number of g's), this had a strong psychological effect: all 
of a sudden, the credibility rating of the whole sentence shot way up for me. It strikes me 
as weird (and wonderful) how, in certain situations, the verification of a tiny percentage 
of a theory can serve to powerfully strengthen your belief in the full theory. And perhaps 
that's the whole point of the sentence! 
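Spot-checking a self-documenting sentence is mechanical enough to hand to a program. Here is a minimal sketch (mine, not Sallows'); to keep it short, it verifies John Atkins' miniature rather than the full Sallows sentence:

```python
from collections import Counter

def letter_counts(text):
    """Tally each letter, ignoring case and non-letter characters."""
    return Counter(c for c in text.lower() if c.isalpha())

# John Atkins' miniature: "'Has eighteen letters' does."
quoted = "Has eighteen letters"
total = sum(letter_counts(quoted).values())
print(total)  # 18 -- the claim checks out
```

Running the same tallies over Sallows' sentence (with commas, apostrophes, and hyphens counted separately) is exactly the drudgery that, he predicts, only the fool would undertake.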

The noted logician Raphael Robinson submitted a playful puzzle in the self- 
documenting genre. Readers are asked to complete the following sentence: 

In this sentence, the number of occurrences of 0 is __, of 1 is __, of 2 is __, of 3 is __, 
of 4 is __, of 5 is __, of 6 is __, of 7 is __, of 8 is __, and of 9 is __. 

Each blank is to be filled with a numeral of one or more digits, written in decimal 
notation. Robinson states that there are exactly two solutions. Readers might also search 
for two sentences of this form that document each other, or even longer loops of that 

Clearly the ultimate in self-documentation would be a sentence that does more 
than merely inventory its parts; it would be a sentence that includes a rule as well, telling 
all the King's men how to put those parts back together again to create a full sentence-in 
short, a self-reproducing sentence. Such 



a sentence is Willard Van Orman Quine's English rendition of Kurt Gödel's classic 
metamathematical homage to Epimenides the Cretan: 

"yields falsehood when appended to its quotation." yields falsehood when 
appended to its quotation. 

Quine's sentence in effect tells the reader how to construct a replica of the sentence being 
read, and then (just for good measure) adds that the replica (not itself for heaven's sake!) 
asserts a falsity! It's a bit reminiscent of the famous remark made by Epilopsides the 
Concretan (second cousin of Epimenides) to Flora, a beautiful young woman whose 
ardent love he could not return (he was betrothed to her twin sister Fauna): "Take heart, 
my dear. I have a suggestion that may cheer you up. Just take one of these cells from my 
muscular biceps here, and clone it. You'll soon wind up with a dashing blade who looks 
and thinks just like me! But do watch out for him- he is given to telling beautiful women 
real whoppers!" 

* * * 

In the early 1950's, John von Neumann worked hard trying to design a machine 
that could build a replica of itself out of raw materials. He came up with a theoretical 
design consisting of hundreds of thousands of parts. Seen in hindsight and with a 
considerable degree of abstraction, the idea behind von Neumann's self-reproducing 
machine turns out to be pretty similar to the means by which DNA replicates itself. And 
this in turn is close to Gödel's method of constructing a self-referential sentence in a 
mathematical language in which at first there seems to be no way of referring to the 
language itself. 

The First Every-Other-Decade Von Neumann Challenge is thus hereby presented 
for ambitious readers: Create a comprehensible and not unreasonably long self- 
documenting sentence that not only lists its parts (at the word level or, better yet, the 
letter level) but also tells how to put them together so that the sentence reconstitutes 
itself. (Notice, by the way, the requirement is that the sentence be not unreasonably long, 
which is different-very different-from being reasonably long.) The parts list (or seed) 
should be an inventory of words or typographical symbols, more or less as in the 
sentences created by Howard Bergerson and Lee Sallows. The inventoried symbols 
should in some way be clearly distinguishable from the text that talks about them. For 
instance, they can be enclosed in quotation marks, printed in another typeface, or referred 
to by name. It is not so important what convention is adopted, so long as the distinction is 
sharp. The rest of the sentence (the building rule) should be printed normally, since it is 
to be regarded not as typographical raw material but as a set of instructions. This is the 
use-mention distinction I discussed in Chapter 1, and to disregard it 



is a serious conceptual weakness. (It is a flaw in Sallows' sentence that slightly tarnishes 
the gold on his medal.) 

The building rule may not talk about normally-printed material-only about parts 
of the inventory. Thus, it is not permitted for the building rule to refer to itself in any 
way! The building rule has to describe structure explicitly. Furthermore (and this is the 
subtlest and probably the most often overlooked aspect of self-reference), the building 
rule must specify which parts are to be printed normally and which parts in quotes (or 
however the raw materials are being indicated). In this respect, Bergerson's sentence fails. 
Although, to its credit, it sharply distinguishes between use and mention by relying on 
upper case for the names of inventory items and lower case for item counts and filler 
words, it does not have separate inventories for items in upper case and lower case. 
Instead it lumps the two together, blurring a vital distinction. 

In the Von Neumann Challenge, extra points will be awarded for solutions given 
in Basic English, or whose seed is entirely at the letter level (as in Sallows' sentence). 
The Quine sentence, although it clearly incorporates a seed (the seven-word phrase in 
quotation marks) and a building rule (that of appending something to its quotation), is not 
a legal entry because its seed is too far from being raw material. It is so structured that it 
is like a fetus more than it is like a zygote. 

* * * 

There is a very good reason, by the way, that the Quine sentence's seed is so 
complicated-in fact, is identical with the building rule, except for the quotation marks. 
The reason is simple to state: You've got to build a copy of the building rule out of raw 
materials, and the more your building rule looks like your seed, the simpler it will be to 
build a copy of it from a copy of the seed. To make a full new sentence, all you need to 
do is make two copies of the seed, carry out whatever simple manipulations will convert 
one copy of the seed into the building rule, and then splice the other copy of the seed onto 
the newly minted building rule to make up a complete new sentence, fresh off the 
assembly line. 
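Self-printing computer programs use exactly this seed-plus-building-rule economy. In the standard Python construction below (a folk classic, not something from the column), the string is the seed and the final line is the building rule; %r makes the quoted copy of the seed, and the % substitution splices it in:

```python
# The seed: inert text containing a hole (%r) for its own quotation.
seed = 'seed = %r\nprint(seed %% seed)'
# The building rule: substitute a quoted copy of the seed into the seed,
# producing the program's complete source text.
print(seed % seed)
```

Run it, feed the output back to the interpreter, and the cycle continues indefinitely, just as with Quine's sentence.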

To make this clearer, it is helpful to show a slight variation on Quine's sentence. 
Imagine that you could recognize only the lowercase roman letters, and that uppercase 
letters were alien to you. Then text printed in upper case would, for all practical purposes, 
be devoid of meaning or interest, whereas text in lower case would be full of meaning 
and interest, able to suggest ideas or actions in your mind. Now suppose someone gave 
you a conversion table that matched each uppercase letter with its lowercase counterpart, 
so that you could "decode" uppercase text. Then one day you came across this piece of 
"meaningless" uppercase text: 

YIELDS FALSEHOOD WHEN APPENDED TO ITS QUOTATION. 

On being decoded, it would yield a lowercase sentence, or rather, a lowercase sentence 
fragment-a predicate without a subject. Suggestive, eh? What might you try out, as a 
possible subject of that predicate? 

This notion of two parallel alphabets, one in which text is inert and meaningless 
and the other in which text is active and meaningful, may strike you as yielding no more 
than a minor variation on Quine's sentence, but in fact it is very similar to an exceedingly 
clever trick that nature discovered and has exploited in every cell of every living 
organism. Our seed-our genome-our DNA-is a huge long volume of inert text written in a 
chemical alphabet that has 64 "uppercase" letters (codons). Our building rules-our 
enzymes-are short, pithy slogans of active text written in a different chemical alphabet 
that has just twenty "lowercase" letters (amino acids). There is a map (the genetic code) 
that converts uppercase letters into lowercase ones. Obviously, some lowercase letters 
must correspond to more than one uppercase letter, but here that is a detail. It also turns 
out that three characters of the uppercase alphabet are not letters but punctuation marks 
telling where one pithy slogan ends and the next one begins-but again, these are details. 
(See Chapter 27 for some of those details.) 

Once you know this mapping, you often won't even remember to distinguish 
between the two chemical alphabets: the inert uppercase codon alphabet and the active 
lowercase amino acid alphabet. The main thing is that, armed with the genetic code, you 
can read the DNA book (seed) as if it were a sequence of enzyme slogans (building rules) 
telling how to write a new DNA book together with a new set of enzyme slogans! It is a 
perfect parallel to our variation on the Quine sentence, where inert, uppercase seed-text 
was converted into active, lowercase rule-text that told how to make a copy of the full 
Quine sentence, given its seed. 
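The two-alphabet trick is easy to mimic in miniature. In this toy sketch (my own analogy, with none of DNA's real machinery), a translation table plays the role of the genetic code, turning inert uppercase text into lowercase text that the interpreter can execute:

```python
import string

# The "genetic code": a table mapping each uppercase letter to its
# lowercase counterpart.
genetic_code = str.maketrans(string.ascii_uppercase, string.ascii_lowercase)

inert = 'PRINT(LEN("INERT TEXT BECOMES ACTIVE"))'  # meaningless as code
active = inert.translate(genetic_code)             # now executable
exec(active)                                       # prints 25
```

The uppercase string suggests nothing to the interpreter; only after passing through the table does it become a building rule that can act.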

A cell's DNA and enzymes act like the seed and building rules of Quine's 
sentence, or the parts list and building rules of von Neumann's self-reproducing 
automaton-or then again, like the seed and building rules of computer programs that print 
themselves out. It is amazing how universal this mechanism of self-reference is, and for 
that reason I always find it quaint that people who rant and rave against the silliness of 
self-reference are themselves composed of trillions and trillions of tiny self-referential molecules. 

* * * 

Scott Kim and I constructed an intriguing pair of sentences: 

The following sentence is totally identical with this one, except that the words 
'following' and 'preceding' have been exchanged, as have the words 'except' and 
'in', and the phrases 'identical with' and 'different from'. 



The preceding sentence is totally different from this one, in that the words 
'preceding' and 'following' have been exchanged, as have the words 'in' and 'except', 
and the phrases 'different from' and 'identical with'. 

At first glance, these sentences are reminiscent of a two-step variant on the 
Epimenides paradox ("The following sentence is true."; "The preceding sentence is 
false."). On second glance, though, they are seen to say exactly the same thing. 
Curiously, my Australian colleague and sometime alter ego, Egbert B. Gebstadter, 
writing in his ever fascinating but often-furiating monthly row "Thetamagical Memas" 
(which appears in Literary Australian), disagrees with me; he maintains they say totally 
different things. (See figure 2-1.) 

Not surprisingly, several of the sentences submitted by readers had a paradoxical 
flavor. Some were variants on Bertrand Russell's paradox about the barber who shaves all 
those who do not shave themselves, or the set of all sets that do not include themselves as 
elements. For instance, Gerald Hull concocted this strange sentence: "This sentence refers 
to every sentence that does not refer to itself." Is Hull's concoction self-referential, or is it 
not? In a similar vein, Michael Gardner cited a Reed College senior thesis whose 
dedication ran: "This thesis is dedicated to all those who did not dedicate their theses to 
themselves." The book Model Theory, by C. C. Chang and H. J. Keisler, bears a similar 
dedication, as Charles Brenner pointed out to me. He also suggested another variant on 
Russell's paradox: Write a computer program that prints out a list of all programs that do 
not ever print themselves out. The question is, of course: Will this program ever print 
itself out? 
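Brenner's question can be made concrete in a toy model (entirely my own construction, under the simplifying assumption that a "program" is just a Python source string in a small closed universe). The lister below mechanically collects every program that fails to print itself out; the paradox appears the moment you ask whether the lister itself belongs on its own list.

```python
import io
from contextlib import redirect_stdout

def output_of(src):
    """Run a tiny 'program' (a Python source string), capturing its output."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        exec(src, {})
    return buf.getvalue()

# A closed toy universe of programs.
programs = [
    'print("hello")',             # prints "hello", not itself
    "print('print(\"hello\")')",  # prints the first program, not itself
]

# The lister: every program in the universe that does NOT print itself out.
non_self_printing = [p for p in programs if output_of(p) != p]
for p in non_self_printing:
    print(p)
```

Both toy programs land on the list. But extend the universe to include the lister itself: if it prints itself out, it must not appear on its own list, so it should not print itself; if it does not, it belongs on the list, so it should. Like Russell's barber, the question has no consistent answer.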

One of the most disorienting sentences came from Robert Boeninger: "This 
sentence does in fact not have the property it claims not to have." Got that? A serious 
problem seems to be to figure out just what property it is that the sentence claims it lacks. 

The Dutch mathematician Hans Freudenthal sent along a charming paradoxical 
anecdote based on self-reference: 

There is a story by the eighteenth-century German Christian Gellert called 
"Der Bauer and sein Sohn" ("The Peasant and His Son"). One day during a walk, 
when the son tells a big lie, his father direly warns him about the "Liars' Bridge", 
which they are approaching. This bridge always collapses when a liar walks across 
it. After hearing this frightening warning, the boy admits his lie and confesses the truth. 

When I [Freudenthal] told a ten-year-old boy this story, he asked me what 
happened when they eventually came to the bridge. I replied, "It collapsed under 
the father, who had lied, since in fact there is no Liars' Bridge." (Or did it?) 

C. W. Smith, writing from London, Ontario, described a situation reminiscent of 
the Epimenides paradox: 



Thetamagical Memas 

Seeking the Whence 
of Letter and Spirit 


A Copious Concatenation of 
Artsy, Scientistic, and Literary Mumbo-Jumbo 

FIGURE 2-1. The cover of Egbert B. Gebstadter's latest book, showing some of his 
"Whorly Art". See the Bibliography for a short description of the book. 

Gebstadter, best known as the author of Copper, Silver, Gold: an Indestructible 
Metallic Alloy, also co-edited The Brain's I with Australian philosopher Denial E. 
Dunnitt, and for two and a half years wrote a monthly row ("Thetamagical Memas") for 
Literary Australian. Having spent the last several years in the Psychology Department of 
Pakistania University in Willington, Pakistania, he has recently joined the faculty of the 
Computer Science Department of the University of Mishuggan in Tom Treeline, 
Mishuggan, where he occupies the Rexall Chair in the College of Art, Sciences, and 
Letters. His current research projects in IA (intelligent artifice) are called Quest-Essence, 
Mind Pattern, Intellect, and Studio. His focus is on deterministic sequential 
models of digital emotion. 



During the 1960's, standing alone in the midst of a weed-strewn field in this city, 
there was a weathered sign that read: "$25 reward for information leading to the arrest 
and conviction of anyone removing this sign." For whatever it's worth, the sign has long 
since disappeared. And so, for that matter, has the field. 

Incidentally, the Epimenides paradox should not be confused with the Nixonides 
paradox, first uttered by Nixonides the Cretin in A.D. 1974: "This statement is 
inoperative." Speaking of Epimenides, one of the most elegant variations on his paradox 
is the "Errata" section in a hypothetical book described by Beverly Rowe. It looks like 


Page (vi): For Errata, read Erratum 

Closely related to the truly paradoxical sentences are those that belong to what I 
call the neurotic and healthy categories. A healthy sentence is one that, so to speak, 
practices what it preaches, whereas a neurotic sentence is one that says one thing while 
doing its opposite. Alan Auerbach has given us a good example in each category. His 
healthy sentence is: "Terse!" His neurotic sentence is: "Proper writing-and you've heard 
this a million times-avoids exaggeration." Here's a healthy one by Brad Shelton: 
"Fourscore and seven words ago, this sentence hadn't started yet." One of the jootsingest 
of sentences came from Carl Bender: 

The rest of this sentence is written in Thailand, on 

Consider a related sentence sent in by David Stork: "It goes without saying 
that ..." To which category does it belong? Perhaps it is a psychotic sentence. 

Pete Maclean contributed a puzzling one: "If the meanings of 'true' and 'false' 
were switched, then this sentence wouldn't be false." I'm still scratching my head over 
what that means! Dan Krimm wrote to tell me: "I've heard that this sentence is a rumor." 
Linda Simonetti contributed the following example, "which actually is not a complete 
sentence, but merely a subordinate clause." Douglas Wolfe offered the following neurotic 
rule of thumb: "Never use the imperative, and it is also never proper to construct a 
sentence using mixed moods." David Moser reminded me of a slogan that the National 
Lampoon once used: "So funny it sells without a slogan!" Perry Weddle wrote, "I'm 
trying to teach my parrot to say, 'I don't understand a thing I say.' When I say it, it's 
viciously self-referential, but in his case?" Stephen Coombs pointed out that "A sentence 
may self-refer in the verb." My mother, Nancy Hofstadter, heard Secretary of State 
Alexander Haig describe a warning message to the Russians as "a calculated ambiguity 
that would be clearly understood". Yes, Sir! 

Jim Propp submitted a sequence of sentences that slide elegantly from the 
neurotically healthy to the healthily neurotic: 

(1) This sentence every third, but it still comprehensible. 

(2) This would easier understand fewer had omitted. 

(3) This impossible except context. 

(4) 4'33" attempt idea. 

The penultimate sentence refers to John Cage's famous piece of piano music consisting of 
four minutes and 33 seconds of silence. The last sentence might well be an excerpt from 
The Wit and Wisdom of Spiro T. Agnew, although it is too short an excerpt to be sure. 
Propp also sent along the following healthy sentence, which was apparently inspired by 
his readings in the book Intelligence in Ape and Man, by David Premack: "By the 
'productivity' of language, I mean the ability of language to introduce new words in terms 
of old ones." 

Philosopher Howard DeLong contributed what might be considered a neurotic syllogism: 

All invalid syllogisms break at least one rule. 
This syllogism breaks at least one rule. 
Therefore, this syllogism is invalid. 

Several readers pointed out phrases and jokes that have been making the rounds. D.A. 
Treissman, for instance, reminded me that "Nostalgia ain't what it used to be." Henry 
Taves mentioned the delightful T-shirts adorned 
with statements such as "My folks went to Florida and all they brought back for me was 
this lousy T-shirt!" And John Fletcher described an episode of the television program 
Laugh-In a few years ago on which Joanne Worley sang, "I'm just a girl who can't say 
'n . . .', 'n . . .', 'n . . .'". John Healy wrote, "I used to think I was indecisive, but now I'm 
not so sure." 

I myself have a few contributions to this collection. A neurotic one is: "In this 
sentence, the concluding three words 'were left out'." Or is it neurotic? These things 
confuse me! In any case, a most healthy sentence is: "This sentence offers its reader(s) 
various alternatives/options that he or she (or they) is (are) free to accept and/or reject." 
And then there is the inevitable "This sentence is neurotic." The thing is, if it is neurotic, 
it practices what it preaches, so it's healthy and therefore cannot be neurotic-but then if it 
isn't neurotic, it's the opposite of what it claims to be, so it's got to be neurotic. No 
wonder it's neurotic, poor thing! 

Speaking of neurotic sentences, what about sentences with identity crises? These 
are, in some sense, the most interesting ones of all to me. A typical example is Dan 
Krimm's vaguely apprehensive question, "If I stated something else, would it still be 
me?" I thought this could be worded better, so I revised it slightly, as follows: "If I said 
something else, would it still be me saying it?" I still was not happy, so I wrote one more 
version: "In another world, could I have been a sentence about Humphrey Bogart?" When 
I paused to reflect on what I had done, I realized that in reworking Dan's sentence, I had 
tampered with its identity in the very way it feared. The question remained, however: 
Were all these variants really the same sentence, deep down? My last experiment along 
these lines was: "In another world, could this sentence have been Dan Krimm's sentence?" 

Clearly some readers were thinking along parallel lines, since John Atkins 
queried, "Can anyone explain why this would still be the same magazine without this 
query, and yet this would not be the same query without this word?" (Of course, just 
which word "this word" refers to is a little vague, but the idea is clear.) And Loul 
McIntosh, who works at a rehabilitation center for formerly schizophrenic patients, had a 
question connecting personal identity with self-referential sentences: "If I were you, who 
would be reading this sentence?" She then added: "That's what I get for working with 
schizophrenics." This brings me to Peter M. Brigham, M.D., who in his work ran across a 
severe case of literary schizophrenia: "You have, of course, just begun reading the 
sentence that you have just finished reading." It's one of my favorites. 

Pursuing the slithery snake of self in his own way, Uilliam M. Bricken, Jr., wrote 
in: "If you think this sentence is confusing, then change one pig." Now, anyone can see 
that this doesn't make any sense at all. Surely what he meant was, "If you think this 
sentence is confusing, then roast one pig." Don't ewe agree? By the by, if ewe think 
"Uilliam" is confusing, then roast one ewe. And while we're mentioning ewes, what's a 
nice word like "ewe" doing in a foxy paragraph like this? 



A while back, driving home late at night, I tuned in to a radio talk show about 
pets. A heated discussion was taking place about the relative merits of various species, 
and at one point the announcer mused, "If a dog had written this broadcast, he might have 
said that people are inferior because they don't wag their tails." This gave me paws for 
thought: What might this column have been like if it had been written by a dog? I can't 
say for sure, but I have a hunch it would have been about chasing squirrels. And it might 
have had a paragraph speculating about what this column would have been like if it had 
been written by a squirrel. 

I think my favorite of all the sent-in-ces was one contributed by Harold Cooper. 
He was inspired by Scott Kim's counterfactual self-referential question: "What would this 
sentence be like if π were 3?" His answer is shown in Figure 2-2. This, to me, exemplifies 
the meaning of the verb 

FIGURE 2-2. A counterfactual self-referential sentence, inspired by Harold Cooper and 
Scott Kim. 

"foots". The six-sided Vs represent the fact that the ratio of the circumference to the 
diameter of a hexagon is 3. Clearly, in Cooper's mind, if n were 3, why, what more 
natural conclusion than that circles would be hexagonsl Who could ever think otherwise? 
I was intrigued by the fact that, as jt's value slipped to 3, not only did circles turn into 
hexagons, but also the interrogative mood slipped into the declarative mood. Remember 
that the question asked how the question itself would be in that strange subjunctive 
world. Would it lose its curiosity about itself and cease to be a question? I did not see 
why that personality trait of the sentence would be affected by the value of π. On the 
other hand, it seemed obvious to me that if π were 3, the antecedent of the conditional 
should no longer be subjunctive. In fact, rather than saying "if π were 3", it should say, 
"because π is 3" (or something to that effect). Putting my thoughts together, then, I came 
up with a slight variation on Cooper's sentence: "What is this sentence like, π being 3 (as 
it is)?" 




Several readers were interested in sentences that refer to the language they are in 
(or not in, as the case may be). An example is "If you spoke English, you'd be in your 
home language now." Jim Propp sent in a delightful pair of such sentences that need to be 
read together: 

Cette phrase se réfère à elle-même, mais d'une manière peu évidente à la plupart 
des Américains. 

Plim glorkle pegram ut replat, trull gen ris clanter froat veb nup lamerack gla smurp 

If you do not understand the first sentence, just get a Martian friend to help you decode 
the second one. That will provide hints about the first. (I apologize for leaving off the 
proper Martian accent marks, but they were not available in this typeface.) 

* * * 

Last January, I published several sentences by David Moser and mentioned that 
he had written an entire story consisting of self-referential sentences. Many readers were 
intrigued. I decided there could be no better way to conclude this column than to print 
David's story in its entirety. So here 'tis! 

This Is the Title of This Story, 
Which Is Also Found Several Times in the Story Itself 

This is the first sentence of this story. This is the second sentence. This is the title 
of this story, which is also found several times in the story itself. This sentence is 
questioning the intrinsic value of the first two sentences. This sentence is to inform you, 
in case you haven't already realized it, that this is a self-referential story, that is, a story 
containing sentences that refer to their own structure and function. This is a sentence that 
provides an ending to the first paragraph. 

This is the first sentence of a new paragraph in a self-referential story. This 
sentence is introducing you to the protagonist of the story, a young boy named Billy. This 
sentence is telling you that Billy is blond and blue-eyed and American and twelve years 
old and strangling his mother. This sentence comments on the awkward nature of the 
self-referential narrative form while recognizing the strange and playful detachment it 
affords the writer. As if illustrating the point made by the last sentence, this sentence 
reminds us, with no trace of facetiousness, that children are a precious gift from God and 
that the world is a better place when graced by the unique joys and delights they bring to it. 

This sentence describes Billy's mother's bulging eyes and protruding 
tongue and makes reference to the unpleasant choking and gagging noises she's 
making. This sentence makes the observation that these are uncertain and difficult times, 
and that relationships, even seemingly deep-rooted and permanent ones, do have a 
tendency to break down. 

Introduces, in this paragraph, the device of sentence fragments. A sentence 
fragment. Another. Good device. Will be used more later. 

This is actually the last sentence of the story but has been placed here by mistake. 
This is the title of this story, which is also found several times in the story itself. As 
Gregor Samsa awoke one morning from uneasy dreams he found himself in his bed 
transformed into a gigantic insect. This sentence informs you that the preceding sentence 
is from another story entirely (a much better one, it must be noted) and has no place at all 
in this particular narrative. Despite the claims of the preceding sentence, this sentence 
feels compelled to inform you that the story you are reading is in actuality "The 
Metamorphosis" by Franz Kafka, and that the sentence referred to by the preceding 
sentence is the only sentence which does indeed belong in this story. This sentence 
overrides the preceding sentence by informing the reader (poor, confused wretch) that 
this piece of literature is actually the Declaration of Independence, but that the author, in 
a show of extreme negligence (if not malicious sabotage), has so far failed to include 
even one single sentence from that stirring document, although he has condescended to 
use a small sentence fragment, namely, "When in the course of human events", embedded 
in quotation marks near the end of a sentence. Showing a keen awareness of the boredom 
and downright hostility of the average reader with regard to the pointless conceptual 
games indulged in by the preceding sentences, this sentence returns us at last to the 
scenario of the story by asking the question, "Why is Billy strangling his mother?" This 
sentence attempts to shed some light on the question posed by the preceding sentence but 
fails. This sentence, however, succeeds, in that it suggests a possible incestuous 
relationship between Billy and his mother and alludes to the concomitant Freudian 
complications any astute reader will immediately envision. Incest. The unspeakable 
taboo. The universal prohibition. Incest. And notice the sentence fragments? Good 
literary device. Will be used more later. 

This is the first sentence in a new paragraph. This is the last sentence in a new paragraph. 

This sentence can serve as either the beginning of the paragraph or the end, 
depending on its placement. This is the title of this story, which is also found several 
times in the story itself. This sentence raises a serious objection to the entire class of self- 
referential sentences that merely comment on their own function or placement within the 
story (e.g., the preceding four sentences), on the grounds that they are monotonously 
predictable, unforgivably self-indulgent, and merely serve to distract the reader from the 
real subject of this story, which at this point seems to concern strangulation and incest 
and who knows what other delightful 
topics. The purpose of this sentence is to point out that the preceding sentence, while not 
itself a member of the class of self-referential sentences it objects to, nevertheless also 
serves merely to distract the reader from the real subject of this story, which actually 
concerns Gregor Samsa's inexplicable transformation into a gigantic insect (despite the 
vociferous counterclaims of other well-meaning although misinformed sentences). This 
sentence can serve as either the beginning of a paragraph or the end, depending on its placement. 

This is the title of this story, which is also found several times in the story itself. 
This is almost the title of the story, which is found only once in the story itself. This 
sentence regretfully states that up to this point the self-referential mode of narrative has 
had a paralyzing effect on the actual progress of the story itself-that is, these sentences 
have been so concerned with analyzing themselves and their role in the story that they 
have failed by and large to perform their function as communicators of events and ideas 
that one hopes coalesce into a plot, character development, etc.-in short, the very raisons 
d'etre of any respectable, hardworking sentence in the midst of a piece of compelling 
prose fiction. This sentence in addition points out the obvious analogy between the plight 
of these agonizingly self-aware sentences and similarly afflicted human beings, and it 
points out the analogous paralyzing effects wrought by excessive and tortured self-consciousness. 

The purpose of this sentence (which can also serve as a paragraph) is to speculate 
that if the Declaration of Independence had been worded and structured as lackadaisically 
and incoherently as this story has been so far, there's no telling what kind of warped 
libertine society we'd be living in now or to what depths of decadence the inhabitants of 
this country might have sunk, even to the point of deranged and debased writers 
constructing irritatingly cumbersome and needlessly prolix sentences that sometimes 
possess the questionable if not downright undesirable quality of referring to themselves 
and they sometimes even become run-on sentences or exhibit other signs of inexcusably 
sloppy grammar like unneeded superfluous redundancies that almost certainly would 
have insidious effects on the lifestyle and morals of our impressionable youth, leading 
them to commit incest or even murder and maybe that's why Billy is strangling his 
mother, because of sentences just like this one, which have no discernible goals or 
perspicuous purpose and just end up anywhere, even in mid 

Bizarre. A sentence fragment. Another fragment. Twelve years old. This is a 
sentence that. Fragmented. And strangling his mother. Sorry, sorry. Bizarre. This. More 
fragments. This is it. Fragments. The title of this story, which. Blond. Sorry, sorry. 
Fragment after fragment. Harder. This is a sentence that. Fragments. Damn good device. 

The purpose of this sentence is threefold: (1) to apologize for the unfortunate and 
inexplicable lapse exhibited by the preceding paragraph; (2) to assure you, the reader, 
that it will not happen again; and (3) to 
reiterate the point that these are uncertain and difficult times and that aspects of language, 
even seemingly stable and deeply rooted ones such as syntax and meaning, do break 
down. This sentence adds nothing substantial to the sentiments of the preceding sentence 
but merely provides a concluding sentence to this paragraph, which otherwise might not 
have one. 

This sentence, in a sudden and courageous burst of altruism, tries to abandon the 
self-referential mode but fails. This sentence tries again, but the attempt is doomed from the 
the start. 

This sentence, in a last-ditch attempt to infuse some iota of story line into this 
paralyzed prose piece, quickly alludes to Billy's frantic cover-up attempts, followed by a 
lyrical, touching, and beautifully written passage wherein Billy is reconciled with his 
father (thus resolving the subliminal Freudian conflicts obvious to any astute reader) and 
a final exciting police chase scene during which Billy is accidentally shot and killed by a 
panicky rookie policeman who is coincidentally named Billy. This sentence, although 
basically in complete sympathy with the laudable efforts of the preceding action-packed 
sentence, reminds the reader that such allusions to a story that doesn't, in fact, yet exist 
are no substitute for the real thing and therefore will not get the author (indolent goof-off 
that he is) off the proverbial hook. 

Paragraph. Paragraph. Paragraph. Paragraph. Paragraph. Paragraph. Paragraph. 
Paragraph. Paragraph. Paragraph. Paragraph. Paragraph. Paragraph. Paragraph. 

The purpose. Of this paragraph. Is to apologize. For its gratuitous use. Of. 
Sentence fragments. Sorry. 

The purpose of this sentence is to apologize for the pointless and silly adolescent 
games indulged in by the preceding two paragraphs, and to express regret on the part of 
us, the more mature sentences, that the entire tone of this story is such that it can't seem 
to communicate a simple, albeit sordid, scenario. 

This sentence wishes to apologize for all the needless apologies found in this 
story (this one included), which, although placed here ostensibly for the benefit of the 
more vexed readers, merely delay in a maddeningly recursive way the continuation of the 
by-now nearly forgotten story line. 

This sentence is bursting at the punctuation marks with news of the dire import of 
self-reference as applied to sentences, a practice that could prove to be a veritable 
Pandora's box of potential havoc, for if a sentence can refer or allude to itself, why not a 
lowly subordinate clause, perhaps this very clause? Or this sentence fragment? Or three 
words? Two words? One? 

Perhaps it is appropriate that this sentence gently and with no trace of 
condescension remind us that these are indeed difficult and uncertain times and that in 
general people just aren't nice enough to each other, and perhaps we, whether sentient 
human beings or sentient sentences, should just try harder. I mean, there is such a thing 
as free will, there has to be, and this sentence is proof of it! Neither this sentence nor you, 
the reader, is 
completely helpless in the face of all the pitiless forces at work in the universe. We 
should stand our ground, face facts, take Mother Nature by the throat and just try harder. 
By the throat. Harder. Harder, harder. 

This is the title of this story, which is also found several times in the story itself. 
This is the last sentence of the story. This is the last sentence of the story. This is 
the last sentence of the story. This is. Sorry. 

Post Scriptum. 

As you can see, there is a vast amount of self-referential material out there in the 
world. To pick only the very best is a monumental task, and certainly a highly subjective 
one. I would like to include here some of the things that I had to omit from the second 
self-reference column with great regret, as well as some of the things that were sent in 
later, in response to it. 

First, though, I would like to mention an amusing incident. When Lee Sallows' 
self-documenting sentence was to be printed in the narrow columns of Scientific 
American, nobody remembered to tell the typesetters not to break any unhyphenated 
words. As luck would have it, two such breaks were introduced, yielding two spurious 
hyphens, thus spoiling (in a superficial sense) the accuracy of his construction. How 
subtly one can get snagged when self-reference is concerned! 

Paul Velleman sent me a copy of the front page of the Ithaca Journal, dated 
January 26, 1981, with a banner headline saying "Ex-hostages enjoy their privacy". He 
wrote, "I think it may be self-referent (and self-contradictory) in a different way than 
your other examples because the medium, positioning, and size of its printing are all 
necessary components of the contradiction." When I looked at the page, I simply saw 
nothing self-referential. I thought maybe I was supposed to look at the flip side, for some 
reason, but that had even less of interest. So I looked back at the headline, and suddenly it 
hit me: How can people "enjoy privacy" when it's being blared across the front page of 
newspapers across the nation? 

Along the same lines, soon thereafter I came across a photograph of Lady Di in 
tears, and in the caption her tears were explained this way: "Lady Di was apparently 
overcome by the strain of the impending royal wedding and having her every move in 
public watched by thousands. See story on page A20. Details on the royal honeymoon, 
page A7." 

John M. Lankford wrote me a long letter from Japan on self-reference, 
remarkably similar in some ways to the one from Flash gFiasco. The most memorable 
paragraph in his letter was the following one: 



Here in Japan, twice a week, I teach a little class in English for a group of 
university students-mainly graduate students in the sciences. I spent one class hour 
taking some of your sentences from the Scientific American article, writing them 
on the blackboard, and asking the students what they meant. The students had a 
fairly good command of written English, but they were poor in their command of 
idiom, quick verbal response, and, for want of a better term, "humor of the 
abstract". As I suspected, many of the sentences-perhaps the most interesting of 
them-die when ripped from their cultural context. I had quite a bit of difficulty 
getting across the idea that the pronoun "I" could refer to the sentence as well as to 
the writer of the sentence. Pronouns cause a lot of trouble in Japan. For example, 
when I ask someone, "Am I wearing a blue jacket?", they might frequently reply, 
"Yes, I am wearing a blue jacket." This confusion is easy in Japanese due to the 
relative lack of pronouns in ordinary speech. Of course you can imagine the extra 
layers of incomprehension that would arise in reading your sentences if the 
boundaries between "you" and "I" were rather vague. 

On a visit to Gettysburg, I read Abraham Lincoln's Gettysburg Address, and for 
the first time its curious self-reference struck me: "The world will little note nor long 
remember what we say here." Lincoln had no way of knowing at the time, but this would 
turn out to be an extremely false sentence (if it is permissible to speak of degrees of 
falsity). In fact, that sentence itself is a very memorable one. While we're on presidential 
self-reference, listen to this self-descriptive remark by former President Ford: "I am the 
first to admit that I am no great orator or no person that got where I have gotten by any 
William Jennings Bryan technique." I guess that where Lincoln's sentence was extremely 
false, Ford's is extremely true. Here is a final self-referential sentence along presidential lines: 

If John F. Kennedy were reading this sentence, Lee Harvey Oswald would have 

* * * 

One of the best self-answering questions came up naturally in the course of a very 
brief telephone call I made to a restaurant one evening. It went this way: "May I help 
you?" to which I answered, "You've already helped me-by telling me that you're open 
today. Thank you. Bye!" And here's a "self-deferential" sentence by Don Byrd: "I am not 
as witty as my author." 

I received this anonymous letter in the mail: "I received this anonymous letter in 
the mail so I can't credit the author"-so I can't credit the author. I also received a request 
from someone living in Calgary, Alberta, whose name I forget (but if he's reading this, 
he'll know who he is) who wrote "This is my feeble way of attempting to get my name 
into print." I hope this satisfies him. 

And now a few miscellaneous examples by me, culled from a second wild binge 
of self-referential sentence-writing I engaged in not long ago. The first three involve 
translation issues. 



One me has translated at the foot of the letter of the French. 
Would not be anomalous if were in Italian. 

When one this sentence into the German to translate wanted, would one the fact exploit, 
that the word order and the punctuation already with the German conventions agree. 

How come this noun phrase doesn't denote the same thing as this noun phrase does? 

Every last word in this sentence is a grotesque misspelling of "towmatow". 

I don't care who wrote this sentence- whoever he is, he's a damn sexist! 

This analogy is like lifting yourself by your own bootstraps. 

Although this sentence begins with the word "because", it is false. 

Despite the fact that it opens like a two-pronged pitchfork-or rather, because of it-this 
sentence resembles a double-edged sword. 

This line from Shakespeare has delusions of grandeur. 

If writers were bakers, this sentence would be exactly a dozen words long. 

If this sentence had been on the previous page, this very moment would have occurred 
approximately 60 seconds ago. 

This sentence is helping to increase the likelihood of nuclear war by distracting you from 
the more serious concerns of the world and beguiling you with the trivial joys of self-reference. 

This sentence is helping to decrease the likelihood of nuclear war by chiding you for 
indulging in the trivial joys of self-reference and reminding you of the more serious 
concerns of the world. 

We mention "our gigantic nuclear arsenal" in order not to use it. 

The whole point of this sentence is to make clear what the whole point of this sentence is. 

This last one's bizarre circularity reminds me of the number P that I invented a couple of 
years ago. P is, for each individual, the number of minutes per month that that person 
spends thinking about the number P. For me, the 
value of P seems to average out at about 2. I certainly wouldn't want it to go much above 
that! I find it crosses my mind most often when I'm shaving. 

* * * 

Dr. J. K. Aronson from Oxford, England, sent in some of the most marvelous discoveries. 
Here is one of his best: 

T is the first, fourth, eleventh, sixteenth, twenty-fourth, twenty-ninth, thirty- 

The sentence never ends, of course. He also submitted a wonderful complementary pair 
that faked me out beautifully. His challenge to you is: Try deciphering the first before 
you read the second. 

I eee oai o ooa a e ooi eee o oe. 

Ths sntnc cntns n vwls nd th prcdng sntnc n cnsnnts. 
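Aronson's T sentence can actually be generated mechanically, by a simple greedy procedure: scan the sentence letter by letter (ignoring spaces and punctuation), and each time a T turns up, append the English ordinal of its position, which in turn supplies further T's to find. The following sketch of that procedure is my own illustration, not Aronson's; the function names and the small ordinal table are choices made for the sketch.

```python
# Greedy generator for Aronson's self-describing sentence
# ("T is the first, fourth, eleventh, sixteenth, ... letter in this sentence").
# The names below (ordinal, aronson) are illustrative choices, not from the text.

ORDINALS = ["", "first", "second", "third", "fourth", "fifth", "sixth",
            "seventh", "eighth", "ninth", "tenth", "eleventh", "twelfth",
            "thirteenth", "fourteenth", "fifteenth", "sixteenth",
            "seventeenth", "eighteenth", "nineteenth"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty",
        "seventy", "eighty", "ninety"]

def ordinal(n: int) -> str:
    """English ordinal word for 1 <= n <= 99."""
    if n < 20:
        return ORDINALS[n]
    tens, ones = divmod(n, 10)
    if ones == 0:
        return TENS[tens][:-1] + "ieth"   # twenty -> twentieth, etc.
    return TENS[tens] + "-" + ORDINALS[ones]

def aronson(k: int) -> list:
    """First k positions of T in the sentence that the terms themselves spell out."""
    sentence = "t is the "   # seed; ordinals get appended as T's are found
    terms = []
    pos = 0                  # 0-based index into the letters scanned so far
    while len(terms) < k:
        letters = [c for c in sentence if c.isalpha()]
        if pos >= len(letters):
            break            # sentence exhausted (won't happen for small k)
        if letters[pos] == "t":
            terms.append(pos + 1)                 # report the 1-based position
            sentence += ordinal(pos + 1) + ", "   # the sentence describes itself
        pos += 1
    return terms

print(aronson(10))   # [1, 4, 11, 16, 24, 29, 33, 35, 39, 45]
```

The first terms it produces (1, 4, 11, 16, 24, 29, ...) are exactly the ordinals visible in the truncated sentence above, and since every appended ordinal contains more T's, the sentence never runs out of positions to report.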

One that reminds me somewhat of Aronson's last sentence above is the following spoof 
on the ads that I believe you can still find in the New York subway, after all these years: 

f y cn rd ths, itn tyg h myxbl cd. 

By a remarkable coincidence, the remainder of Carl Bender's sentence "The rest of this 
sentence is written in Thailand, on" was discovered in, of all places, Bangkok, Thailand, 
by Gregory Bell, who lives there. He has luckily provided me with a perfect copy of it, so 
for all those who were dying of suspense, it is shown in Figure 2-3. 
One evening during a bad electrical storm, I got the following message on the computer 
from Marsha Meredith: 

I ]ion't be able to work at all tonight b]iecause of the w&atherBr/ I]i'm getting too many 
bad characters (as you can see). loo baw3d-I get spurious characters]i all over ]ithe place- 
talk totrrRBow,lF7U Marsha. 

FIGURE 2-3. The conclusion of Carl Bender's sentence fragment ("The rest of this 
sentence is written in Thailand, on "), discovered by Gregory Bell on a scrap of paper in 
Bangkok, Thailand. Translated, it says: "this sheet of paper and is in Thai". 




I wish she had had the patience to type more carefully, so that I could have understood 
what her problem was. 
The sentences having to do with identity in counterfactual worlds, such as Dan 
Krimm's and its alter egos, reminded me of a blurb by E. O. Wilson I read recently on 
Lewis Thomas' latest book: "If Montaigne had possessed a deep knowledge of twentieth- 
century biology, he would have been Lewis Thomas." Ah me, the flittering elf of self! 
And Banesh Hoffmann, in Relativity and Its Roots, has written: "How safe we would be 
from death by nuclear bomb had we been born in the time of Shakespeare." Sure, except 
we'd also all be long dead-unless, of course, the 24th-century doctors who will invent 
immortality pills had also been born in Shakespeare's time! 
The following self-referential poem just came to me one day: 

Twice five syllables, 

Plus seven, can't say much-but ... 

That's haiku for you. 

The genre of self-referential poetry-including haiku-was actually quite popular. Tom 
McDonald submitted this non-limerick: 

A very sad poet was Jenny 

Her limericks weren't worth a penny. 

In technique they were sound, 

Yet somehow she found 

Whenever she tried to write any, 

That she always wrote one line too many! 

Several people sent in complex poems of various sorts, and mentioned books of them, 
such as John Hollander's Rhyme's Reason, a collection of poems describing their own forms. 

* * * 

Self -referential book titles are enjoying a mild vogue these days. Raymond Smullyan was 
one of the most enthusiastic explorers of the potential of this idea, using the titles What Is 
the Name of This Book? and This Book Needs No Title. Actually, I think Needs No Title 
would have said it more crisply, or maybe just No Title. Come to think of it, why not No, 
or even just plain ? (I hope you could tell that those blanks were in italics!) 
Other self-referential book titles I have collected include these: 



Forget all the rules you ever learned about graphic design. Including the ones in this book. 
Steal This Book 
Ban This Book 

Deduct This Book (How Not to Pay Taxes While Ronald Reagan Is President) 

Do You Think Mom Would Like This One ? 

Dewey Decimal No. 510.46 FC H3 

I Never Can Remember What It's Called 

The Great American Novel 

ISBN 0-943568-01-3 

Self Referential Book Title 

The Top Book on the New York Times Bestseller List for the Past Ten Weeks 

Don't Go Overseas Until You've Read This Book 

Soon to Become a Major Motion Picture 

By Me, William Shakespeare (by Robert Payne) 

That Book with the Red Cover in Your Window 

Reviews of This Book 

Oh, by the way, some of these are fake, others are real. For example, the last one, 
Reviews of This Book, is just a fantasy of mine. I would love to see a book consisting of 
nothing but a collection of reviews of it that appeared (after its publication, of course) in 
major newspapers and magazines. It sounds paradoxical, but it could be arranged with a 
lot of planning and hard work. First, a group of major journals would all have to agree to 
run reviews of the book by the various contributors to the book. Then all the reviewers 
would begin writing. But they would have to mail off their various drafts to all the other 
reviewers very regularly so that all the reviews could evolve together, and thus eventually 
reach a stable state of a kind known in physics as a "Hartree-Fock self-consistent 
solution". Then the book could be published, after which its reviews would come out in 
their respective journals, as per arrangement. (A little more on this idea is given in the 
postscript to Chapter 16.) 

* * * 

I chanced across two books devoted to the subject of indexing books. 
They are: A Theory of Indexing (by Gerald Salton) and Typescripts, Proofs, and Indexes 
(by Judith Butcher). Amazingly, neither one has an index. I also received a curious letter 
soliciting funds, which began this way: "Dear Friend: In these last months, I've been 
making a study of the money-raising letter as an art form ..." I didn't read any further. 
Aldo Spinelli, an Italian artist and writer, sent me some of his products. One, a short book 
called Loopings, has pages documenting their own word and letter counts in various 
complex ways, and includes at the end a short essay on 
various ways in which documents can tally themselves up or can mutually tally each 
other in twisty loops. Another, called Chisel Book, documents its own production, 
beginning with the idea, going through the finding of a publisher, making the layout, 
designing the cover, printing it, and so on. 

Ashleigh Brilliant is the inventor of a vast number of aphorisms he calls 
"potshots", many of which have become very popular phrases in this country. For some 
reason, he has a self-imposed limit of seventeen words per potshot. A few typical 
potshots (all taken from his four books listed in the Bibliography) are: 

What would life be, without me? 

As long as I have you, I can endure all the troubles you inevitably bring. 
Remember me? I'm the one who never made any impression on you. 
Why does trouble always come at the wrong time? 

Due to circumstances beyond my control, I am master of my fate and captain of my soul. 
Although strictly speaking these are not self-referential sentences, they are all admirable 
examples of how the world constantly tangles with itself in multifarious self-undermining 
ways, and as such, they definitely belong in this chapter. As a matter of fact, I would like 
to take this occasion to announce that Ashleigh Brilliant is the 1984 recipient of the last 
annual Nobaloney Prize for Aphoristic Eloquence. The traditional Nobaloney ceremony, 
involving the awarding of a $1,000,000 cash prize two minutes before the recipient's 
decapitation, has been waived, at Mr. Brilliant's request. 

There are other books containing much of interest to the self-reference addict. I 
would particularly recommend the recent More on Oxymoron, by Patrick Hughes, as well 
as the earlier Vicious Circles and Infinity, by Hughes and George Brecht. Also in this 
category are three thin volumes on Murphy's Law, compiled by Arthur Bloch. Murphy's 
Law, of course, is the one that says, "If anything can go wrong, it will", although when I 
first heard of it, it was called the "Fourth Law of Thermodynamics". O'Toole's 
Commentary on Murphy's Law is: "Murphy was an optimist." Goldberg's Commentary 
thereupon is: "O'Toole was an optimist." And finally, there is Schnatterly's Summing Up: 
"If anything can't go wrong, it will." 

My own law, "Hofstadter's Law", states: "It always takes longer than you think it 
will take, even if you take into account Hofstadter's Law." Despite being its enunciator, I 
never seem to be able to take it fully into account in budgeting my own time. To help me 
out, therefore, my friend Don Byrd came up with his 
own law that I have taken to heart: 

Byrd's Law: 

It always takes longer than you think it will take, even if you take into account 
Hofstadter's Law. 

Unfortunately, Byrd himself seems unable to take this law into account. 




On Viral Sentences and 
Self-Replicating Structures 

January, 1983 

Two years ago, when I first wrote about self-referential sentences, I was hit by an 
avalanche of mail from readers intrigued by the phenomenon of self-reference in its many 
different guises. I had the chance to print some of those responses one year ago, and that 
column then triggered a second wave of replies. Many of them have cast self-reference in 
a new light of various sorts. In this column, I would like to describe the ideas of several 
people, two of whom responded to my initial column with remarkably similar letters: 
Stephen Walton of New York City and Donald R. Going of Oxon Hill, Maryland. 

Walton and Going saw self-replicating sentences as similar to viruses-small 
objects that enslave larger and more self-sufficient "host" objects, getting the hosts by 
hook or by crook to carry out a complex sequence of replicating operations that bring 
new copies into being, which are then free to go off and enslave further hosts, and so on. 
"Viral sentences", as Walton called them, are "those that seek to obtain their own 
reproduction by commandeering the facilities of more complex entities". 

Both Walton and Going were struck by the perniciousness of such sentences: the 
selfish way in which they invade a space of ideas and, merely by making copies of 
themselves all over the place, manage to take over a large portion of that space. Why do 
they not manage to overrun all of that idea-space? A good question. The answer should 
be obvious to students of evolution: competition from other self-replicators. One type of 
replicator seizes a region of the space and becomes good at fending off rivals; thus a 
"niche" in idea-space is carved out. 

This idea of an evolutionary struggle for survival by self-replicating ideas is not 
original with Walton or Going, although both had fresh things to say on it. The first 
reference I know of to this notion is in a passage by neurophysiologist Roger Sperry in an 
article he wrote in 1965 called "Mind, Brain, and Humanist Values". He says: "Ideas 
cause ideas and help evolve 
new ideas. They interact with each other and with other mental forces in the same brain, 
in neighboring brains, and, thanks to global communication, in far distant, foreign brains. 
And they also interact with the external surroundings to produce in toto a burstwise 
advance in evolution that is far beyond anything to hit the evolutionary scene yet, 
including the emergence of the living cell." 

Shortly thereafter, in 1970, the molecular biologist Jacques Monod came out with 
his richly stimulating and provocative book Chance and Necessity. In its last chapter, 
"The Kingdom and the Darkness", he wrote of the selection of ideas as follows: 

For a biologist it is tempting to draw a parallel between the evolution of ideas and 
that of the biosphere. For while the abstract kingdom stands at a yet greater distance 
above the biosphere than the latter does above the nonliving universe, ideas have 
retained some of the properties of organisms. Like them, they tend to perpetuate 
their structure and to breed; they too can fuse, recombine, segregate their content; 
indeed they too can evolve, and in this evolution selection must surely play an 
important role. I shall not hazard a theory of the selection of ideas. But one may at 
least try to define some of the principal factors involved in it. This selection must 
necessarily operate at two levels: that of the mind itself and that of performance. 

The performance value of an idea depends upon the change it brings to the 
behavior of the person or the group that adopts it. The human group upon which a 
given idea confers greater cohesiveness, greater ambition, and greater self- 
confidence thereby receives from it an added power to expand which will insure the 
promotion of the idea itself. Its capacity to "take", the extent to which it can be "put 
over" has little to do with the amount of objective truth the idea may contain. The 
important thing about the stout armature a religious ideology constitutes for a 
society is not what goes into its structure, but the fact that this structure is accepted, 
that it gains sway. So one cannot well separate such an idea's power to spread from 
its power to perform. 

The "spreading power"-the infectivity, as it were-of ideas, is much more 
difficult to analyze. Let us say that it depends upon preexisting structures in the 
mind, among them ideas already implanted by culture, but also undoubtedly upon 
certain innate structures which we are hard put to identify. What is very plain, 
however, is that the ideas having the highest invading potential are those that 
explain man by assigning him his place in an immanent destiny, in whose bosom 
his anxiety dissolves. 

Monod refers to the universe of ideas, or what I earlier termed "idea-space", as 
"the abstract kingdom". Since he portrays it as a close analogue to the biosphere, we 
could as well call it the "ideosphere". 

* * * 

In 1976, evolutionary biologist Richard Dawkins published his book The Selfish 
Gene, whose last chapter develops this theme further. Dawkins' name 
for the unit of replication and selection in the ideosphere-the ideosphere's counterpart to 
the biosphere's gene-is meme, rhyming with "theme" or "scheme". As a library is an 
organized collection of books, so a memory is an organized collection of memes. And the 
soup in which memes grow and flourish-the analogue to the "primordial soup" out of 
which life first oozed-is the soup of human culture. Dawkins writes: 

Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of 
making pots or of building arches. Just as genes propagate themselves in the gene 
pool by leaping from body to body via sperms or eggs, so memes propagate 
themselves in the meme pool by leaping from brain to brain via a process which, in 
the broad sense, can be called imitation. If a scientist hears, or reads about, a good 
idea, he passes it on to his colleagues and students. He mentions it in his articles 
and his lectures. If the idea catches on, it can be said to propagate itself, spreading 
from brain to brain. As my colleague N. K. Humphrey neatly summed up an earlier 
draft of this chapter: ' . . . memes should be regarded as living structures, not just 
metaphorically but technically. When you plant a fertile meme in my mind you 
literally parasitize my brain, turning it into a vehicle for the meme's propagation in 
just the way that a virus may parasitize the genetic mechanism of a host cell. And 
this isn't just a way of talking-the meme for, say, 'belief in life after death' is 
actually realized physically, millions of times over, as a structure in the nervous 
systems of individual men the world over.' 

Consider the idea of God. We do not know how it arose in the meme pool. 
Probably it originated many times by independent 'mutation'. In any case, it is very 
old indeed. How does it replicate itself? By the spoken and written word, aided by 
great music and great art. Why does it have such high survival value? Remember 
that 'survival value' here does not mean value for a gene in a gene pool, but value 
for a meme in a meme pool. The question really means: What is it about the idea of 
a god which gives it its stability and penetrance in the cultural environment? The 
survival value of the god meme in the meme pool results from its great 
psychological appeal. It provides a superficially plausible answer to deep and 
troubling questions about existence. It suggests that injustices in this world may be 
rectified in the next. The 'everlasting arms' hold out a cushion against our own 
inadequacies which, like a doctor's placebo, is none the less effective for being 
imaginary. These are some of the reasons why the idea of God is copied so readily 
by successive generations of individual brains. God exists, if only in the form of a 
meme with high survival value, or infective power, in the environment provided by 
human culture. 

Dawkins takes care here to emphasize that there need not be an exact copy of 
each meme, written in some universal memetic code, in each person's brain. Memes, like 
genes, are susceptible to variation or distortion-the analogue to mutation. Various 
mutations of a meme will have to compete with each other, as well as with other memes, 
for attention- which is to say, for brain resources in terms of both space and time devoted 
to that meme. Not only must memes compete for inner resources, but, since they are 
transmissible visually and aurally, they must compete for radio and television time, 
billboard space, newspaper and magazine column-inches, and library shelf-space. 
Furthermore, some memes will tend to discredit others, while some groups of memes will 
tend to be internally self-reinforcing. Dawkins says: 

... Mutually suitable teeth, claws, guts, and sense organs evolved in 
carnivore gene pools, while a different stable set of characteristics emerged from 
herbivore gene pools. Does anything analogous occur in meme pools? Has the god 
meme, say, become associated with any other particular memes, and does this 
association assist the survival of each of the participating memes? Perhaps we 
could regard an organized church, with its architecture, rituals, laws, music, art, and 
written tradition, as a co-adapted stable set of mutually-assisting memes. 

To take a particular example, an aspect of doctrine which has been very 
effective in enforcing religious observance is the threat of hell fire. Many children 
and even some adults believe that they will suffer ghastly torments after death if 
they do not obey the priestly rules. This is a particularly nasty technique of 
persuasion, causing great psychological anguish throughout the middle ages and 
even today. But it is highly effective. It might almost have been planned 
deliberately by a machiavellian priesthood trained in deep psychological 
indoctrination techniques. However, I doubt if the priests were that clever. Much 
more probably, unconscious memes have ensured their own survival value by 
virtue of those same qualities of pseudo-ruthlessness which successful genes 
display. The idea of hell fire is, quite simply, self-perpetuating, because of its own 
deep psychological impact. It has become linked with the god meme because the 
two reinforce each other, and assist each other's survival in the meme pool. 

Another member of the religious meme complex is called faith. It means 
blind trust, in the absence of evidence, even in the teeth of evidence .... Nothing is 
more lethal for certain kinds of meme than a tendency to look for evidence .... The 
meme for blind faith secures its own perpetuation by the simple unconscious 
expedient of discouraging rational inquiry. 

Blind faith can justify anything. If a man believes in a different god, or even 
if he uses a different ritual for worshipping the same god, blind faith can decree that 
he should die-on the cross, at the stake, skewered on a Crusader's sword, shot in a 
Beirut street, or blown up in a bar in Belfast. Memes for blind faith have their own 
ruthless ways of propagating themselves. This is true of patriotic and political as 
well as religious blind faith. 

* * * 

When I muse about memes, I often find myself picturing an ephemeral flickering 
pattern of sparks leaping from brain to brain, screaming "Me, me!" Walton's and Going's 
letters reinforced this image in interesting ways. For instance, Walton begins with the 
simplest imaginable viral sentences-"Say me!" and "Copy me!"-and moves quickly to 
more complex variations with blandishments ("If you copy me, I'll grant you three 
wishes!") or 
threats ("Say me or I'll put a curse on you!"), neither of which, he observes, is likely to be 
able to keep its word. Of course, as he points out, this may not matter, the only final test 
of viability being success at survival in the meme pool. All's fair in love and war-and war 
includes the eternal battle for survival, in the ideosphere no less than in the biosphere. 

To be sure, very few people above the age of five will fall for the simple-minded 
threats or promises of these sentences. However, if you simply tack on the phrase "in the 
afterlife", far more people will be lured into the memetic trap. Walton observes that a 
similar gimmick is used by your typical chain letter (or "viral text"), which "promises 
wealth to those who faithfully replicate it and threatens doom to any who fail to copy it". 
Do you remember the first time you received such a chain letter? Do you recall the sad 
tale of "Don Elliot, who received $50,000 but then lost it because he broke the chain"? 
And the grim tale of "General Welch in the Philippines, who lost his life [or was it his 
wife?] six days after he received this letter because he failed to circulate the prayer-but 
before he died, he received $775,000"? Poor Don Elliot! Poor General Welch! It's hard 
not to be just a little sucked in by such tales, even if you wind up throwing the letter out. 

I found Walton's phrases "viral sentence" and "viral text" to be exceedingly 
catchy-little memes in themselves, definitely worthy of replication some 700,000 times in 
print, and who knows how many times orally beyond that. At least that's my opinion. Of 
course, it also depends on how the editor of Scientific American feels. [It turned out he 
felt fine about it.] Well, now, Walton's own viral text, as you can see here before your 
eyes, has managed to commandeer the facilities of a very powerful host- an entire 
magazine and printing press and distribution service. It has leapt aboard and is now-even 
as you read this viral sentence-propagating itself madly throughout the ideosphere! 

This idea of choosing the right host is itself an important aspect of the quality of a 
viral entity. Walton puts it this way: 

The recipient of a viral text can, of course, make a big difference. A tobacco mosaic 
virus that attacks a salt crystal is out of luck, and some people rip up chain letters 
on sight. A manuscript sent to an editor may be considered viral, even though it 
contains no explicit self-reference, because it is attempting to secure its own 
reproduction through an appropriate host; the same manuscript sent to someone 
who has nothing to do with publishing may have no viral quality at all. 

As it concludes, Walton's letter graciously steps forward from the page and 
squeaks to me directly on its own behalf: "Finally, I (this text) would be delighted to be 
included, in whole or in part, in your next discussion of self-reference. With that in mind, 
please allow me to apologize in advance for infecting you." 

* * * 



Whereas Walton mentioned Dawkins in his letter, Going seems not to have been 
aware of Dawkins at all, which makes his letter quite remarkable in its close connection 
to Dawkins' ideas. Going suggests that we consider, to begin with, Sentence A: 

It is your duty to convince others that this sentence is true. 

As he says: 

If you were foolish enough to believe this sentence, you would attempt to convince 
your friends that A is true. If they were equally foolish, they would convince their 
friends, and so on until every human mind contained a copy of A. Thus, A is a self- 
replicating sentence. More particularly, it is the intellectual equivalent of a virus. If 
Sentence A were to enter a mind, it would take control of the mind's intellectual 
machinery and use it to produce hundreds of copies of itself in other minds. 

The problem with Sentence A, of course, is that it is absurd; no one could 
possibly believe it. However, consider the following: 

System S: 

S1: Blah. 
S2: Blah blah. 
S3: Blah blah blah. 
... 
S99: Blah blah blah blah blah blah. 
S100: It is your duty to convince others that System S is true. 


Here, S1 through S99 are meant to be statements that constitute a belief system 
having some degree of coherency. If System S taken as a whole were convincing, 
then the entire system would be self-replicating. System S would be especially 
convincing if S100 were not stated explicitly but held as a logical consequence of 
the other ideas in the system. 

Let us refer to Going's S100 as the hook of System S, for it is by this hook that 
System S hopes to hoist itself onto a higher level of power. Note that on its own, a hook 
that in effect says "It is your duty to believe me" is not a viable viral entity; in order to 
"fly", it needs to drag something extra along with it, just as a kite needs a tail to stabilize 
it. Pure lift goes out of control and self-destructs, but controlled lift can lift itself along 
with its controller. Similarly, S100 and S1-S99 (taken as a set) are symbiotes: they play 
complementary, mutually supportive roles in the survival of the meme they together 
constitute. Now Going develops this theme a little further: 

Statements S1-S99 are the bait which attracts the fish and conceals the hook. No 
bait-no bite. If the fish is fool enough to swallow the baited hook, it will have little 
enough time to enjoy the bait. Once the hook takes hold, the fish will lose all its 
fishiness and become instead a busy factory for the manufacture of baited hooks. 

Are there any real idea systems that behave like System S? I know of at least three. 
Consider the following: 

System X: 


X1: Anyone who does not believe System X will burn in hell. 
X2: It is your duty to save others from suffering. 


If you believed in System X, you would attempt to save others from hell by 
convincing them that System X is true. Thus System X has an implicit 'hook' that 
follows from its two explicit sentences, and so System X is a self-replicating idea 
system. Without being impious, one may suggest that this mechanism has played 
some small role in the spread of Christianity. 

Self-replicating ideas are most often found in politics. Consider Sentence W: 

The whales are in danger of extinction. 

If you believed this idea, you would want to save the whales. You would quickly 
discover that you could not reach this goal by yourself. You would need the help of 
thousands of like-minded people. The first step in getting their help would be to 
convince them that Sentence W is true. Thus a hook like S100 follows from 
Sentence W, and Sentence W is a self-replicating idea. 

In a democracy, nearly any idea will tend to replicate since the only way to win an 
election is to convince other people to share your ideas. Most political ideas are not 
properly self-replicating, since the motive for spreading the idea is separate from 
the idea itself. Statement W, on the other hand, is genuinely self-replicating, since 
the duty to propagate it is a direct logical consequence of W itself. Ideas like W can 
sometimes take on a life of their own and drive their own propagation. 
A more sinister form of self-replication is Sentence B: 

The bourgeoisie is oppressing the proletariat. 

This statement is self-replicating for the same reason as W is. The desire to 
propagate statements like B is driven by a desire to protect a victim figure from a 
villain figure. Such ideas are dangerous because belief in them may lead to attacks 
on the supposed villain. Statement B also illustrates the fact 
that the self-replicating character of an idea depends only upon the idea's logical 
structure, not upon its truth. 

Statement B is merely a special case of the generalized statement, Sentence V: 

The villain is wronging the victim. 

Here, the word villain must be replaced with the name of some real group 
(capitalists, communists, imperialists, Jews, freemasons, aristocrats, men, 
foreigners, etc.). Likewise, victim must be replaced with the name of the 
corresponding victim and wronging filled in as desired. The result will be a self- 
replicating idea system for the same reasons as W and B were. Note that each of the 
suggested substitutions yields a historically attested idea system. It has long been 
recognized that most extremist mass movements are based on a belief similar to V. 
Part of the reason seems to be that type-V statements reduce to the 'hook', S100, and 
therefore define self-replicating idea systems. One hesitates to explain real 
historical events in terms of such a silly mechanism, and yet .... 

Going brings his ideas to an amusing conclusion as follows: 

Suppose we parody my thesis by proposing Sentence E: 

The self-replicating ideas are conspiring to enslave our minds. 

This 'paranoid' statement is clearly an idea of type V. Thus, the thesis seems to 
describe itself. Further, if we accept E, then we must say that this type-V idea 
implies that we must distrust all ideas of type V. This is the Epimenides Paradox. 

It is interesting that all these people who have explored these ideas have given 
examples ranging from the very small scale of such things as catchy tunes (for example, 
Dawkins cites the opening theme of Beethoven's fifth symphony) and phrases (the word 
"meme" itself) to the very large scale of ideologies and religions. Dawkins uses the term 
meme complex for these larger agglomerations of memes; however, I prefer the single 
word scheme. 

One reason I prefer it is that it fits so well with the usage suggested by psychiatrist 
and writer Allen Wheelis in his novel The Scheme of Things. Its central character is a 
psychiatrist and writer named Oliver Thompson, whose darkly brooding essays are 
scattered throughout the book, interspersed with brightly colored, evocative episodes. 
Thompson is obsessed with the difference between, on the one hand, "the raw nature of 
existence, unadorned, unmediated", which he refers to repeatedly as "the way things are", 
and, on the other hand, "schemes of things", invented by humans: ways of making order 
and sense out of the way things are. Here are some of Thompson's musings on that theme: 

I want to write a book .... the story of one man whose life becomes a metaphor for the 
entire experience of man on earth. It will portray his search through a succession of 
schemes of things, show the breakdown, one after another, of each pattern he finds, his 
going on always to another, always in the hope that the scheme of things he finds and 
for the moment is serving is not a scheme of things at all but reality, the way things 
are, therefore an absolute that will endure forever, within which he can serve, to which 
he can contribute, and through which he can give his mortal life meaning and so 
achieve eternal life.... 

The scheme of things is a system of order. Beginning as our view of the world, 
it finally becomes our world. We live within the space defined by its coordinates. It is 
self-evidently true, is accepted so naturally and automatically that one is not aware of 
an act of acceptance having taken place. It comes with one's mother's milk, is chanted 
in school, proclaimed from the White House, insinuated by television, validated at 
Harvard. Like the air we breathe, the scheme of things disappears, becomes simply 
reality, the way things are. It is the lie necessary to life. The world as it exists beyond 
that scheme becomes vague, irrelevant, largely unperceived, finally nonexistent .... 

No scheme of things has ever been both coextensive with the way things are 
and also true to the way things are. All schemes of things involve limitation and denial .... 

A scheme of things is a plan for salvation. How well it works will depend upon 
its scope and authority. If it is small, even great achievement in its service does little to 
dispel death. A scheme of things may be as large as Christianity or as small as the 
Alameda County Bowling League. We seek the largest possible scheme of things, not 
in a reaching out for truth, but because the more comprehensive the scheme the greater 
its promise of banishing dread. If we can make our lives mean something in a cosmic 
scheme we will live in the certainty of immortality. Those attributes of a scheme of 
things that determine its durability and success are its scope, the opportunity it offers 
for participation and contribution, and the conviction with which it is held as self- 
evidently true. The very great success of Christianity for a thousand years follows 
upon its having been of universal scope, including and accounting for everything, 
assigning to all things a proper place; offering to every man, whether prince or beggar, 
savant or fool, the privilege of working in the Lord's vineyard; and being accepted as 
true throughout the Western world. 

As a scheme of things is modified by inroads from outlying existence, it loses 
authority, is less able to banish dread; its adherents fall away. Eventually it fades, 
exists only in history, becomes quaint or primitive, becomes, finally, a myth. What we 
know as legends were once blueprints of reality. The Church was right to stop 
Galileo; activities such as his import into the regnant scheme of things new being 
which will eventually destroy that scheme. 

Taken in Wheelis' way, "scheme" seems a fitting replacement for Dawkins' 
"meme complex". A scheme imposes a top-down kind of perceptual order on the world, 
propagating itself ruthlessly, like Going's System S with its "hook". Wheelis' description 
of the inadequacy of all "schemes of things" to fully and accurately capture "the way 
things are" is strongly reminiscent of the vulnerability of all sufficiently powerful formal 

systems to either incompleteness or inconsistency-a vulnerability that ensues from 
another kind of "hook": the famous Gödelian hook, which arises from the capacity for 
self-reference of such systems, although neither Wheelis nor Thompson makes any 
mention of the analogy. We shall come back to Gödel momentarily. 

* * * 

The reader of this novel must be struck by the professional similarity of Wheelis 
and his protagonist. It is impossible to read the book and not to surmise that Thompson's 
views are reflecting Wheelis' own views-and yet, who can say? It is a tease. Even more 
tantalizing is the title of Thompson's imaginary book, which Wheelis casually mentions 
toward the end of the novel: it is The Way Things Are-a striking contrast to the title of 
the real book in which it exists. One wonders: What is the meaning of this elegant literary 
pleat in which one level folds back on another? What is the symbolism of Wheelis within Wheelis? 

Such a twist, by which a thing (sentence, book, system, person) seems to refer to 
itself but does so only by allusion to something resembling itself, is called indirect self- 
reference. You can do this by pointing at your image in a mirror and saying, "That person 
sure is good-looking!" That one is very simple, because the connection between 
something and its mirror image is so familiar and obvious-seeming to us that there seems 
to be no distance whatsoever between direct and indirect referents: we equate them 
completely. Thus it seems there is no referential indirectness. 

On the other hand, this depends upon the ease with which our perceptual systems 
convert a mirror image into its reverse, and upon other qualities of our cognitive systems 
that allow us to see through several layers of translation without being aware of the 
layers-like looking through many feet of water and seeing not the water but only what 
lies at its bottom. 

Some indirect self-references are of course subtler than others. Consider the case 
of Matt and Libby, a couple ostensibly having a conversation about their friends Tammy 
and Bill. It happens that Matt and Libby are having some problems in their relationship, 
and those problems are quite analogous to those of Tammy and Bill, only with sexes 
reversed: Matt is to Libby what Tammy is to Bill, in their respective relationships. So as 
Matt and Libby's conversation progresses, although on the surface level it is completely 
about their friends Tammy and Bill, on another level it is actually about themselves, as 
reflected in these other people. It is almost as if, by talking about Tammy and Bill, Matt 
and Libby are going over a fable by Aesop that has obvious relevance to their own plight. 
There are things going on simultaneously on two levels, and it is hard to tell how 
conscious either of the participants is of the exchange of dual messages-one of concern 
about their friends, one of concern about themselves. 

Indirect self-reference can be exploited in the most unexpected and serious ways. 
Consider the case of President Reagan, who on a recent occasion of high Soviet- 
American tension over Iran, went out of his way to recall President Truman's behavior in 
1945, when Truman made some very blunt threats to the Soviets about the possibility of 
the U.S. using nuclear weapons if need be against any Soviet threat in Iran. Merely by 
bringing up the memory of that occasion, Reagan was inviting a mapping to be made 
between himself and Truman, and thereby he was issuing a not-so-veiled threat, though 
no one could point to anything explicit. There simply was no way that a conscious being 
could fail to make the connection. The resemblance of the two situations was too blatant. 

Thus, does self-reference really come in two varieties-direct and indirect-or are 
the two types just distant points on a continuum? I would say unhesitatingly that it is the 
latter. And furthermore, you can delete the prefix "self-", so that the question becomes 
one of reference in general. The essence is simply that one thing refers to another 
whenever, to a conscious being, there is a sufficiently compelling mapping between the 
roles the two things are perceived to play in some larger structures or systems. (See 
Chapter 24 for further discussion of the perception of such roles.) Caution is needed here. 
By "conscious being", I mean an analogy-hungry perceiving machine that gets along in 
the world thanks to its perceptions; it need not be human or even organic. Actually, I 
would carry the abstraction of the term "reference" even further, as follows. The mapping 
of systems and roles that establishes reference need not actually be perceived by any such 
being: it suffices that the mapping exist and simply be perceptible to such a being were it 
to chance by. 

* * * 

The movie The French Lieutenant's Woman (based on John Fowles' novel of the 
same name) provides an elegant example of ambiguous degrees of reference. It consists 
of interlaced vignettes from two concurrently developing stories both of which involve 
complex romances; one takes place in Victorian England, the other in the present. The 
fact that there are two romances already suggests, even if only slightly, that a mapping is 
called for. But much more is suggested than that. There are structural similarities between 
the two romances: each of them has triangular qualities, and in both stories, only one leg 
of the triangle is focused upon. Moreover, the same two actors play the two lovers in both 
romances, so that you see them in alternating contexts and with alternating personality 
traits. The reason for this "coincidence" is that the contemporary story concerns the 
making of a film of the Victorian story. 

As the two stories unfold in parallel, a number of coincidences arise that suggest 
ever more strongly that a mapping should be made. But it is left to the movie viewer to 
carry this mapping out; it is never called for explicitly. 

After a time, though, it simply becomes unavoidable. What is pleasant in this game is the 
fluidity left to the viewer: there is much room for artistic license in seeing connections, or 
suspecting or even inventing connections. 

Indirect reference of the artistic type is much less precise than indirect reference 
of the formal type. The latter arises when two formal systems are isomorphic-that is, they 
have strictly analogous internal structures, so that there is a rigorous one-to-one mapping 
between the roles in the one and the roles in the other. In such a case, the existence of 
genuine reference becomes as clear to us as in the case of someone talking about their 
mirror image: we take it as immediate, pure self-reference, without even noticing the 
indirectness, the translational steps mediated by the isomorphism. In fact, the connection 
may seem too direct even to be called "reference"; some may see it simply as identity. 

This perceptual immediacy is the reason that Gödel's famous sentence G of 
mathematical logic is said to be self-referential. Everyone accepts the idea that G talks 
about a number, g (though a radical skeptic might question even that!); the tricky 
Gödelian step is in seeing that g (the number) plays a role in the system of natural 
numbers strictly analogous to the role that G (the sentence) plays in the axiomatic system 
it is expressed in. This Wheelis-like oblique reference by G to itself via its "image" g is 
generally accepted as genuine self-reference. (Note that we have even one further 
mapping: G plays the role of Wheelis, and its Gödel number g that of Wheelis' alter ego Thompson.) 

The two abstract mappings that, when telescoped, establish G's self-reference but 
make it seem indirect can be collapsed into just one mapping, following a slogan that we 
might formulate this way: "If A refers to B, and B is just like C, then A refers to C." For 
instance, we can let A and C be Wheelis, with B being Thompson. This makes Wheelis' 
self-reference a "theorem". Of course, this "theorem" is not rigorously proven, since our 
slogan has to be taken with a grain of salt. Being "just like" something else is a highly 
disputable matter. 

However, in a formal context where is just like is virtually synonymous with plays 
a role isomorphic to that of, then the slogan can have a strict meaning, and thereby justify 
a theorem more rigorously. In particular, if A and C are equated with G, and B with g, 
then our slogan runs: "If G refers to g, and g plays a role isomorphic to that of G, then G 
refers to G." Since the premises are true, the conclusion must be true. According to this 
scheme of things, then, G is a genuinely self-referential sentence, rather than some sort of 
logical illusion as deceptive as an Escher print. 

* * * 

Indirect self-reference suggests the idea of indirect self-replication, in which a 
viral entity, instead of replicating itself exactly, brings into being another entity that plays 
the same role as it does, but in some other system: perhaps 

its mirror image, perhaps its translation into French, perhaps a string of the product 
numbers of all its parts, together with pre-addressed envelopes containing checks made 
out to the factories where those parts are made, and a list of instructions telling what to do 
with all the parts when they arrive in the mail. 

This may sound familiar to some readers. In fact, it is an indirect reference to the 
Von Neumann Challenge, the puzzle posed in Chapter 2 to create a self-describing 
sentence whose only quoted matter is at the word or letter level, rather than at the level of 
whole quoted phrases. I discovered, as I received candidate solutions, that many readers 
did not understand what this requirement meant. The challenge came out of an objection 
to the complexity of the "seed" (the quoted part) in Quine's version of the Epimenides paradox: 

"yields falsehood when appended to its quotation." yields falsehood when 
appended to its quotation. 
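Quine's trick is mechanical enough to act out in a few lines of code. Here is a small Python sketch (my own gloss, not anything of Quine's): the building rule simply appends the seed to its own quotation.

```python
def quine_expand(seed: str) -> str:
    # Quine's building rule: append the seed to its own quotation.
    return '"' + seed + '" ' + seed

seed = "yields falsehood when appended to its quotation."
sentence = quine_expand(seed)
# The result is the whole Quine sentence: the quoted seed followed by
# the unquoted building rule, so the sentence it describes is itself.
```

Note that the seed here is nearly the entire sentence, which is exactly the objection the Von Neumann Challenge was meant to address.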

To see what is strange here, imagine that you wish to have a space-roving robot 
build a copy of itself out of raw materials that it encounters in its travels. Here is one way 
you could do it: Make the robot symmetrical, like a human being. Also make the robot 
able to make a mirror-image copy of any structure that it encounters along its way. 
Finally, have the robot be programmed to scan the world constantly, the way a hawk 
scans the ground for rodents. The search image in the robot's case is that of an object 
identical to its own left half. The robot need not be aware that its target is identical to its 
left half; the search can go on merrily for what seems to it to be merely a very complex 
and arbitrary structure. When, after scouring the universe for seventeen googolplex years, 
it finally comes across such a structure, then of course the robot activates its mirror- 
image-production facility and creates a right half. The last step is to fasten the two halves 
together, and presto! A copy emerges. Easy as pie-provided you're willing to wait 
seventeen googolplex years (give or take a few minutes). 

The arbitrary and peculiar aspect of the Quine sentence, then, is that its seed is 
half as complex-which is to say, nearly as complex-as the sentence itself. If we resume 
our robot parable, what we'd ideally like in a self-replicating robot is the ability to make 
itself literally from the ground up: let us say, for instance, to mine iron ore, to smelt it, to 
cast it in molds to make nuts and bolts and sheet metal and so on; and finally, to be able 
to assemble the small parts into larger and larger subunits until, miraculously, a replica is 
born out of truly raw materials. This was the spirit of the Von Neumann Challenge: I 
wanted a linguistic counterpart to this "self-replicating robot of the second kind". 

In particular, this means a self-documenting or self-building sentence that builds 
both its halves-its quoted seed and its unquoted building rule-out of linguistic raw 
materials (words or letters). Many readers failed to 

understand what this implies. The most common mistake was to present, as the seed, a 
long sequence of individually quoted words (or letters) in a specific order, then to exploit 
that order in the building rule. Well then, you might as well have quoted one big long 
ordered string, as Quine did. The idea of my challenge was that all structure in the built 
object must arise exclusively out of some principle enunciated in the building rule, not 
out of the seed's internal structure. 

Just as a self-replicating robot in some random alien environment is hardly likely 
to find all its parts lined up on a shelf in order of assembly but must rely on its "brain" or 
program to recognize raw parts wherever and whenever they turn up so that it can grab 
them and therefrom assemble a copy of itself, so the desired sentence must treat the 
pieces of the seed without regard to the order in which they are listed, yet must be able to 
construct itself in the proper order out of them. Thus it's fine if you enclose the entire 
seed within a single pair of quotes, rather than quoting each word individually-all that 
matters is that the seed's word order (or better yet, its letter order) not be exploited. The 
seed of the ideal solution would be a long inventory of parts, similar to the list of 
ingredients of a recipe-perhaps a list of 50 'e's, then 46 't's, and so on. Clearly those letters 
cannot remain in that order; they simply constitute the raw materials out of which the 
new sentence is to be built. 

* * * 

Nobody sent in a solution whose seed was at the primordial level of letters. A few 
people, however, did send in adequate, if not wonderfully elegant, solutions with seeds at 
the word level. The first correct solution I received came from Frank Palmer of Chicago, 
who therefore receives the first 'Johnnie' award-a self-replicating dollar bill given to the 
Grand Winner of the First Every-Other-Decade Von Neumann Challenge. Unfortunately, 
the dollar bill consumes the entire body of its owner in its bizarre process of self- 
replication, and so it is wisest to simply lock it up to protect oneself from its voracious appetite. 

Palmer submitted several versions. In them, he utilized upper and lower cases to 
distinguish between seed and building rule, respectively. Here is one solution, slightly 
modified by me: 

after alphabetizing, decapitalize FOR AFTER WORDS STRING FINALLY 
SUBSTITUTING NONVOCALIC UNORDERED UPPERCASE DECAPITALIZE 
FGPBVKXQJZ ALPHABETIZING, finally for nonvocalic string substituting 
unordered uppercase words 

Let us watch how it works, step by careful step. We must bear in mind that the 
instructions we are following are the lowercase words printed above, and that the 
uppercase words are not to be read as instructions. Nor, for that 

matter, are the lowercase words that we will soon be working with. They are like the 
inert, anesthetized body of a patient being operated on, who, when the operation is over, 
will awake and become animate. So let's go. First we are to alphabetize the seed. (I am 
treating the comma as attached to the word preceding it.) This gives us the following: 

AFTER ALPHABETIZING, DECAPITALIZE FGPBVKXQJZ FINALLY FOR NONVOCALIC 
STRING SUBSTITUTING UNORDERED UPPERCASE WORDS 
Next we are to decapitalize it. This will yield some lowercase words-the "anesthetized" 
lowercase words I spoke of above: 

after alphabetizing, decapitalize fgpbvkxqjz finally for nonvocalic string 
substituting unordered uppercase words 

All right; now our final instruction is to locate a nonvocalic string (that's easy: 
"fgpbvkxqjz") and to substitute for it the uppercase words, in any order (that is, the 
original seed itself, but without regard for its structure above the level of the individual 
word-unit). This last bit of surgery yields: 

after alphabetizing, decapitalize SUBSTITUTING FINALLY WORDS AFTER 
UPPERCASE NONVOCALIC FOR DECAPITALIZE STRING UNORDERED 
FGPBVKXQJZ ALPHABETIZING, finally for nonvocalic string 
substituting unordered uppercase words 

And this is a perfect copy of our starting sentence! Or rather, semiperfect. Why only 
semiperfect? Because the seed has been randomly scrambled in the act of self- 
reproduction. The beauty of the scheme, though, is that the internal structure of the seed 
is entirely irrelevant to the efficacy of the sentence as a self-replicator. All that matters is 
that the new building rule say the proper thing, and it will do so no matter what order the 
seed from which it sprang was in. Now this fresh new baby sentence can wake up from 
its anesthesia and go off to replicate itself in turn. 
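Since all three steps are purely typographical, they can be acted out by a short program. The following Python sketch is my own illustration (not part of Palmer's submission); the parent sentence is written with its full twelve-word seed, in an arbitrary order, since the order does not matter:

```python
import random
import re

def self_replicate(sentence: str) -> str:
    words = sentence.split()
    # The seed is the collection of uppercase words (punctuation stays
    # attached to the word preceding it).
    seed = [w for w in words if w == w.upper() and any(c.isalpha() for c in w)]
    # Step 1: alphabetize the seed.  Step 2: decapitalize it, yielding
    # the new, still "anesthetized" building rule.
    rule = [w.lower() for w in sorted(seed, key=str.lower)]
    # Step 3: substitute for the nonvocalic string the seed, unordered.
    child = []
    for w in rule:
        if re.search(r"[aeiou]", w):
            child.append(w)
        else:
            child.extend(random.sample(seed, len(seed)))
    return " ".join(child)

parent = ("after alphabetizing, decapitalize FOR AFTER WORDS STRING FINALLY "
          "SUBSTITUTING NONVOCALIC UNORDERED UPPERCASE DECAPITALIZE "
          "FGPBVKXQJZ ALPHABETIZING, finally for nonvocalic string "
          "substituting unordered uppercase words")
child = self_replicate(parent)
# The child is a semiperfect copy: the same words, with a freshly
# scrambled seed, ready to replicate itself in turn.
```

Each generation scrambles the seed anew, yet every generation's building rule comes out letter-perfect, because alphabetization erases the scrambling.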

The critical step was the first one: alphabetization. This turns the arbitrarily- 
ordered seed into a grammatical, meaningful command-merely by mechanically 
exploiting a presumed knowledge of the "ABC's". But why not? It is perfectly reasonable 
to presume superficial typographical knowledge about letters and words, since such 
knowledge deals with printed material as raw material: purely syntactically, without 
regard to the meanings carried therein. This is just like the way that enzymes in the living 
cell deal with the DNA and RNA they chop up and alter and piece together again: purely 
chemically, without regard to the "meanings" carried therein. Just as chemical valences 
and affinities and so on are taken as givens in the workings 

of the cell, so alphabetic and typographic facts are taken as givens in the V. N. Challenge. 

When Palmer sent in his solution, he happened to write down his seed in order of 
increasing length of words, but that is inessential; any random order would have done, 
and that sort of idea is the crucial point that many readers missed. Another rather elegant 
solution was sent in by Martin Weichert of Munich. It runs this way (slightly modified by me): 
Alphabetize and append, copied in quotes, these words: "these append, in 
Alphabetize and words: quotes, copied" 

It works on the same principle as Palmer's sentence, and again features a seed whose 
internal structure (at least at the word level) is irrelevant to successful self-replication. 
Weichert also sent along an intriguing palindromic solution in Esperanto, in which the 
flexible word order of the language plays a key role. Michael Borowitz and Bob Stein of 
Durham, North Carolina sent in a solution similar to Palmer's. 

* * * 

Finally, last year's gold-medal winner for self-documentation, Lee Sallows, was a 
bit piqued by my suggestion that the gold on his medal was somewhat tarnished since he 
had not paid close enough attention to the use-mention distinction. Apparently I goaded 
him into constructing an even more elaborate self-documenting sentence. Although it 
does not quite fit what I had in mind for the Von Neumann Challenge, as it does not spell 
out its own construction explicitly at the letter level or word level, it is another marvelous 
Sallowsian gem, and I shall therefore generously allow the gold on his medal to go 
untarnished this year. (Apologies to those purists who insist that gold doesn't tarnish. I 
must have been confusing it with copper and silver. How silly of me!) Herewith follows 
Sallows' 1982 contribution: 


down ten 'a's, 
eight 'c's, ten 'd's, 
fifty-two 'e's, thirty-eight 'f's, 
sixteen 'g's, thirty 'h's, forty-eight 'i's, 
six 'l's, four 'm's, thirty-two 'n's, forty-four 'o's, 
four 'p's, four 'q's, forty-two 'r's, eighty-four 's's, 
seventy-six 't's, twenty-eight 'u's, four 'v's, four 'W's, 
eighteen 'w's, fourteen 'x's, thirty-two 'y's, four ':'s, 
four '*'s, twenty-six '-'s, fifty-eight ','s, 
sixty '"s and sixty '"s, in a 
palindromic sequence 
whose second 
half runs 
snur flah 
dnoces esohw 
ecneuqes cimordnilap 
a ni ,s"' ytxis dna s"' ytxis 
,s',' thgie-ytfif ,s'-' xis-ytnewt ,s'*' ruof 
,s':' ruof ,s'y' owt-ytriht ,s'x' neetruof ,s'w' neethgie 
,s'W' ruof ,s'v' ruof ,s'u' thgie-ytnewt ,s't' xis-ytneves 
,s's' ruof-ythgie ,s'r' owt-ytrof ,s'q' ruof ,s'p' ruof 
,s'o' ruof-ytrof ,s'n' owt-ytriht ,s'm' ruof ,s'l' xis 
,s'i' thgie-ytrof ,s'h' ytriht ,s'g' neetxis 
,s'f' thgie-ytriht ,s'e' owt-ytfif 
,s'd' net ,s'c' thgie 
,s'a' net nwod 

Post Scriptum 

After writing this column, I received much mail testifying to the fact that there are 
a large number of people who have been infected by the "meme" meme. Arel Lucas 
suggested that the discipline that studies memes and their connections to humans and 
other potential carriers of them be known as memetics, by analogy with "genetics". I 
think this is a good suggestion, and hope it will be adopted. 

Maurice Gueron wrote me from Paris to tell me that he believed the first clear 
exposition of the idea of self-reproducing ideas that inhabit the brains 

of organisms was put forward in 1952 by Pierre Auger, a physicist at the Sorbonne, in his 
book L'homme microscopique. Gueron sent me a photocopy of the relevant portions, and 
I could indeed see how prophetic the book was. 

I received a copy of the book General Theory of Evolution by Vilmos Csányi, a 
Hungarian geneticist. In this book, he attempts to work out a theory in which memes and 
genes evolve in parallel. A similar attempt is made in the book Ever-Expanding 
Horizons: The Dual Informational Sources of Human Evolution, by the American 
biologist Carl B. Swanson. 

The most thorough-going research on the topic of pure memetics I have yet run 
across is that of Aaron Lynch, an engineering physicist at Fermilab in Illinois, who in his 
spare time is writing a book called Abstract Evolution. The portions that I have read go 
very carefully into the many "options", to speak anthropomorphically, that are open to a 
meme for getting itself reproduced over and over in the ideosphere (a term Lynch and I 
invented independently). It promises to be a provocative book, and I look forward to its publication. 

* * * 

Jay Hook, a mathematics graduate student, was provoked by the solutions to the 
Von Neumann Challenge as follows: 

The notion that it takes two to reproduce is suggestive. Perhaps a change in 
terminology is appropriate. The component that you call the "seed" might be 
thought of as the "female" fragment-the egg that grows into an adult, but only after 
receiving instructions from the sperm, the "male" fragment-the building rule. In this 
interpretation, our sentences say everything twice because they are hermaphroditic: 
the male and female fragments appear together in the same individual. 

To better mimic nature, we should construct pairs of sentences or phrases, 
one male and one female-expressions that taken individually produce nothing but 
when put together in a dark room make copies of themselves. I propose the 
following. The male fragment 

After alphabetizing and deitalicizing, duplicate female fragment in its original version. 

doesn't seem to say much by itself, and the female fragment 

in and its After female fragment original version. deitalicizing, duplicate alphabetizing 

certainly doesn't, but let them at each other and watch the fireworks. (I 
follow your practice of assuming each punctuation mark to be attached to the 
preceding word.) The male takes the lead, and sets to work on the female. First we 
alphabetize and deitalicize her, he says; that gives a new male fragment. 

Then we simply make a copy of her-so we get one of each! 

Nature still doesn't work this way, of course; it's not clear that couples that 
produce offspring only in boy-girl pairs are really superior to self-replicating 
hermaphrodites. Ideally, our fragments should produce either a copy of the male or 
a copy of the female, depending on, say, the day of the week or the parity of some 
external index like the integer part of the current Dow Jones Industrial Average. 
Surprisingly, this isn't hard. Take the male to be 

Alphabetize and deitalicize female fragment if index is odd; otherwise 
reproduce same verbatim. 

and take for the female 

if is and odd; same index female fragment otherwise reproduce verbatim. 
Alphabetize deitalicize 

One more refinement. To this point, each offspring has been exactly 
identical to one of its parents. We can introduce variation, at least in the girls, as 
follows. Male fragment: 

Alphabetize and deitalicize female fragment if index is odd; otherwise 
randomly rearrange the words. 

Female fragment: 

if is and the odd; index female words. fragment randomly otherwise rearrange 
Alphabetize deitalicize 

Now all of the boys will be the spittin' image of their father, but whereas one 
daughter might be 

index rearrange if the Alphabetize randomly fragment odd; deitalicize is and 
words. otherwise female 

another might be 

Alphabetize index and rearrange the fragment if female is odd; otherwise 
randomly deitalicize words. 

The important point, however, is that all of these female offspring, however 
diverse, are genetically capable of mating with any of the (identical) males. Can 
you find a way to introduce variation in the males without producing sterile offspring? 

In conclusion, allow me to observe that the Dow closed on Friday at 1076.0. 
Therefore I proudly proclaim: It's a girl! 
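Hook's mating scheme is just as mechanical as Palmer's, and it too can be acted out in a few lines of Python. The sketch below is my own illustration (italics are not modeled in plain strings, so "deitalicizing" is taken for granted): the parity of an external index decides whether the couple produces a boy or a girl.

```python
import random

def mate(female_words, index):
    # Odd index: alphabetize (and implicitly deitalicize) the female
    # fragment, yielding a boy identical to his father.
    if index % 2 == 1:
        return sorted(female_words, key=str.lower)
    # Even index: randomly rearrange the words, yielding a girl who is
    # a variant of her mother.
    return random.sample(list(female_words), len(female_words))

female = ("if is and the odd; index female words. fragment randomly "
          "otherwise rearrange Alphabetize deitalicize").split()
boy = " ".join(mate(female, 1077))   # odd index: a boy
girl = " ".join(mate(female, 1076))  # even index: a girl
```

Every boy reads "Alphabetize and deitalicize female fragment if index is odd; otherwise randomly rearrange the words.", while every girl is a fresh scramble of her mother's words; with the Dow's integer part at 1076, the even index indeed announces a girl.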

* * * 

I now close by returning to Lee Sallows. This indefatigable researcher of what he calls 
logological space continued his quest after the holy grail of perfect self-documentation. 
His jealousy was aroused in the extreme when Rudy Kousbroek, who is Dutch, and Sarah 
Hart, who is English, together tossed off what Sallows terms "the greatest logological 
jewel the world has ever seen". Kousbroek and Hart's self-documenting sentence, though 
in Dutch, ought to be pretty clearly understandable by anyone who takes the time to look 
at it carefully: 

Dit pangram bevat vijf a's, twee b's, twee c's, drie d's, zesenveertig e's, vijf f's, vier 
g's, twee h's, vijftien i's, vier j's, een k, twee l's, twee m's, zeventien n's, een o, twee 
p's, een q, zeven r's, vierentwintig s's, zestien t's, een u, elf v's, acht w's, een x, een 
y, en zes z's. 

In fact, you can learn how to count in Dutch by studying it! 

There's not an ounce of fat or awkwardness in this sentence, and it drove 
Sallows mad that he couldn't come up with an equally perfect pangram (sentence 
containing every letter of the alphabet) in English. Every attempt had some flaw in it. 
So in desperation, Sallows, electronics engineer that he is, decided he would design a 
high-speed dedicated "letter-crunching" machine to search the far reaches of 
logological space for an equivalent English sentence. Sallows sent me some material 
on his Pangram Machine. He says: 

At the heart of the beast is a clock-driven cascade of sixteen Johnson-counters: the 
electronic analogue of a stepper-motor-driven stack of combination lock-discs. 
Every tick of the clock clicks in a new combination of numbers: a unique 
combination of counter output lines becomes activated .... Pilot tests have been 
surprisingly encouraging; it looks as though a clock frequency of a million 
combinations per second is quite realistic. Even so it would take 317 years to 
explore the ten-deep stratum. But does it have to be ten? With this reduced to a 
modest but still very worthwhile six-deep range it will take just 32.6 days. Now 
we're talking! 

Over the past eight weeks I have devoted every spare second to constructing this 
rocket for exploring the far regions of logological space .... Will it really fly? So far 
it looks very promising. And the end is already in sight. With a bit of luck Rudy 
Kousbroek will be able to launch the machine on its 32-day journey when he comes 
to visit here at the end of this month. If so, a bottle of champagne will not be out of place. 

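Sallows's timing figures check out. Assuming each of the sixteen Johnson counters sweeps a range of ten (or six) values, the search space is 10^16 (or 6^16) combinations, tested at a million per second:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600
SECONDS_PER_DAY = 24 * 3600
RATE = 1_000_000  # combinations tested per second

def search_time(depth, counters=16):
    """Seconds to exhaust a depth**counters search space at RATE combos/sec."""
    return depth ** counters / RATE

years_ten_deep = search_time(10) / SECONDS_PER_YEAR  # about 317 years
days_six_deep = search_time(6) / SECONDS_PER_DAY     # about 32.6 days
```

The interpretation of "depth" as the per-counter range is an assumption on my part, but it reproduces both of Sallows's numbers exactly.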
Two months later, I got a most excited transmission from Lee, which began with 
"EUREKA!", the word the Pangram Machine was set up to print on success. He 
then presented three pangrams that his machine had discovered, floating "out there" 
somewhere beyond the orbit of Pluto. 
My favorite one is this: 

This pangram tallies five a's, one b, one c, two d's, twenty-eight e's, eight f's, six g's, 
eight h's, thirteen i's, one j, one k, three l's, two m's, eighteen n's, fifteen 
o's, two p's, one q, seven r's, twenty-five s's, twenty-two t's, four u's, four v's, nine 
w's, two x's, four y's, and one z. 

Now that's what I call a success for mechanical translation! 

Sallows writes: "I wager ten guilders that nobody will succeed in producing a 
perfect self-documenting solution (or proof of its non-existence) to the sentence 
beginning, 'This computer-generated pangram contains ...' within the next ten years. No 
tricks allowed. The format to be exactly as in the above pangrams. Either 'and' or '&' is 
permissible. Result to be derived exclusively by von Neumann architecture digital 
computer (no supercomputers, no parallel processing). Fancy your chances?" Anyone 
who wants to write to Sallows can do so, at Buurmansweg 30, 6525 RW Nijmegen, the 
Netherlands. 

Much though I am delighted by Sallows' ingenious machine and his plucky 
challenge, I expect him to lose his wager before you can say "Raphael Robinson". For my 
reasons, see the postscript to Chapter 16. 

On Viral Sentences and Self-Replicating Structures 



Nomic: A Self-Modifying Game 
Based on Reflexivity in Law 

June, 1982 

IN his excellent book A Profile of Mathematical Logic, the philosopher Howard 
DeLong tells the following classic story of ancient Greece. "Protagoras had contracted to 
teach Euathlus rhetoric so that he could become a lawyer. Euathlus initially paid only half 
of the large fee, and they agreed that the second installment should be paid after Euathlus 
had won his first case in court. Euathlus, however, delayed going into practice for quite 
some time. Protagoras, worrying about his reputation as well as wanting the money, 
decided to sue. In court Protagoras argued: 

Euathlus maintains he should not pay me but this is absurd. For suppose he wins 
this case. Since this is his maiden appearance in court he then ought to pay me 
because he won his first case. On the other hand, suppose he loses his case. Then he 
ought to pay me by the judgment of the court. Since he must either win or lose the 
case he must pay me. 

Euathlus had been a good student and was able to answer Protagoras' argument with a 
similar one of his own: 

Protagoras maintains that I should pay him but it is this which is absurd. For 
suppose he wins this case. Since I will not have won my first case I do not need to 
pay him according to our agreement. On the other hand, suppose he loses the case. 
Then I do not have to pay him by judgment of the court. Since he must either win 
or lose I do not have to pay him." 

Then DeLong adds, "It is clear that to straighten out such puzzles one has to inquire 
into general procedures of argument." Actually, to many people, it is not at all clear that 
general procedures of argument will need scrutiny-quite the contrary. To many people, 
paradoxes such as this one appear to be mere pimples or blemishes on the face of the law, 
which can be removed by simple cosmetic surgery. Similarly, many people who take 
theology seriously think that paradoxical questions about omnipotence, such as "Can God 
make a stone so heavy that It cannot lift it?", are just childish riddles, not serious 
theological dilemmas, and can be resolved in a definitive and easy way. Throughout 
history, simplistic or patchwork remedies have been proposed for all kinds of dilemmas 
created by loops of this sort. Bertrand Russell's theory of types is a famous example in 
logic. But the dreaded loops just won't go away that easily, as Russell found 
out. Wherever they occur, they are deep and pervasive, and attempts to unravel them lead 
down unexpected pathways. 

In fact, reflexivity dilemmas of the Protagoras-vs.-Euathlus type and problems of 
conflicting omnipotence crop up with astonishing regularity in the down-to-earth 
discipline of law. Yet until recently, their central importance in defining the nature of law 
has been little noticed. In the past few years, only a handful of specialized papers on the 
subject have appeared in law journals and philosophy journals. 

It was with surprise and delight, therefore, that I learned that an entire book on the 
role of reflexivity in law was in preparation. I first received word of it, "The Paradox of 
Self-Amendment: A Study of Logic, Law, Omnipotence, and Change", in a letter from its 
author, Peter Suber, who identified himself as a philosophy Ph.D. and lawyer now 
teaching philosophy at Earlham College in Richmond, Indiana. He hopes "The Paradox 
of Self-Amendment" will be out soon. 

In correspondence with Suber, I have found out that he has an even more ambitious book 
in the works, tentatively titled "The Anatomy of Reflexivity", which is a study of 
reflexivity in its broadest sense, encompassing, as he says, "the self-reference of signs, 
the self-applicability of principles, the self-justification and self-refutation of propositions 
and inferences, the self-creation and self-destruction of legal and logical entities, the 
self-limitation and self-augmentation of powers, circular reasoning, circular causation, vicious 
and benign circles, feedback systems, mutual dependency, reciprocity, and organic form." 

In his original letter to me, Suber not only gave a number of interesting examples 
of self-reference in law but also presented a game he calls Nomic (from the Greek νόμος 
(nómos), meaning "law"), which is presented in an appendix to The Paradox of 
Self-Amendment. I found reading the rules of Nomic to be a mind-opening experience. Much 
of this article will be devoted to Nomic, but before we tackle the game itself, I would like 
to set the stage by mentioning some other examples of reflexivity in the political arena. 

* * * 

My friend Scott Buresh, himself a lawyer, described the following perplexing 
hypothetical dilemma, which he first heard posed in a class on constitutional law. What if 
Congress passes a law saying that henceforth all determinations by the Supreme Court 
shall be made by a 6-3 majority 
(rather than a simple 5-4 majority, as is currently the case)? Imagine that this law is 
challenged in a court case that eventually makes its way up to the Supreme Court itself, 
and that the Supreme Court rules that the law is unconstitutional-and needless to say the 
ruling is by a 5-4 majority. What happens? This is a classic paradox of the separation of 
powers and it was nearly played out, in a minor variation, during the Watergate era, when 
President Nixon threatened he would obey a Supreme Court ruling to turn over his tapes 
only if it were "definitive", which presumably meant something like a unanimous decision. 

It is interesting to note that conservatives are now trying to limit the jurisdiction 
of the Supreme Court over issues such as abortion and prayer in the schools. 
Constitutional scholars expect that a showdown might ensue if Congress passes such a 
statute and the Supreme Court is asked to review its constitutionality. 

Conflicts that enmesh the Supreme Court with itself can arise in less flashy ways. 
Suppose the Supreme Court proposes to build an annex in an area that environmentalists 
want to protect. The environmentalists take their case to court, and it gets blown up into a 
large affair that eventually reaches the level of the Supreme Court. What happens? 
Clearly the reason this kind of thing cannot be prevented is that any court is itself a part 
of society, with buildings, employees, contracts, and so on. And since the law deals with 
things of this kind, no court at any level can guarantee that it will never get ensnared in 
legal problems. 

If self-ensnaredness is a rare event for the Supreme Court, it is not so rare for 
other arms of government. An interesting case came up recently in San Francisco. There 
had been a large number of complaints about the way the police department was handling 
cases, and so an introverted "Internal Affairs Bureau" was set up to look into such matters 
as police brutality. But then, inevitably, complaints arose that the Internal Affairs Bureau 
was whitewashing its findings, and so Mayor Dianne Feinstein set up a 
doubly-introverted committee, again internal to the police department, to investigate the 
performance of the Internal Affairs Bureau. The last I heard was that the report of this 
committee was unfavorable. What finally resulted I do not know. 

Parliamentary procedure too can lead to the most tangled of situations. For 
example, there are several editions of Robert's Rules of Order, and a body must choose 
which set of rules will govern its deliberations. The latest edition of Robert's Rules states 
that if no specific edition is chosen as the governing one, then the most recent issue holds. 
A problem arises, though, if one hasn't adopted the latest edition, since one cannot then 
rely on its authority to tell one to rely on it. 

In some ways, parliamentary procedure, which deals with how to handle 
simultaneous and competing claims for attention, bears a remarkable resemblance to the 
way a large computer system must manage its own internal affairs. Within such a system, 
there is always a program called an 
operating system with a part called the scheduling algorithm, which weighs priorities and 
decides which activity will proceed next. In a "multiprocessing" system, this means 
determining which activity gets the next "time slice" (lasting for anywhere from a 
millisecond to a few seconds, or possibly even for an unlimited time, depending on the 
activity's priority and numerous other factors). But there are also interrupts that come and 
interfere with-oops, just a moment, my telephone's ringing. Be right back. There. Sorry 
we were disturbed. Someone wanted to sell me a telephone-answering system. Now what 
would-ah, ah, just a sec-ah-choo!-sorry-what would I do with one of those things? Now 
where was I? Oh, yes-interrupts. Well, in a way they are like telephone calls that take the 
store clerk away from you, annoying you in the extreme, since you have come to the store 
in person, whereas the telephone caller has been lazy and yet is given higher priority. 

A good scheduling algorithm strives to be equitable, but all kinds of conflicts can 
arise, in which interrupts interrupt interrupts and are then themselves interrupted. 
Moreover, the scheduler has to be able to run its own internal decision-making programs 
with high priority, yet not so high a priority that nothing else ever runs. Sometimes the 
internal and external priorities can become so tangled that the entire system begins to 
"thrash". This is the term used to describe a situation where the operating system is 
spending most of its time bogged down in "introverted" computation, deciding what it 
should spend its time doing. Needless to say, during periods of thrashing, very little 
"real" computation gets done. It sounds quite like the cognitive state a person can get into 
when too many factors are weighing down all at once and the slightest thought on any 
topic seems to trigger a rash of paradoxical dilemmas from which there is no escape. 
Sometimes the only solution is to go to sleep, and let the paradoxes somehow drift away 
into a better perspective. 
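The priority juggling described above can be sketched with a toy scheduler; this is a minimal illustration, not any real operating system's algorithm:

```python
import heapq

class Scheduler:
    """Toy priority scheduler: a lower number means a higher priority."""

    def __init__(self):
        self._queue = []
        self._arrival = 0  # tie-breaker: equal priorities run in arrival order

    def submit(self, priority, task):
        heapq.heappush(self._queue, (priority, self._arrival, task))
        self._arrival += 1

    def run_next(self):
        """Pop and return the highest-priority pending task."""
        _, _, task = heapq.heappop(self._queue)
        return task

s = Scheduler()
s.submit(5, "user computation")
s.submit(3, "scheduler bookkeeping")
s.submit(1, "telephone interrupt")
# The interrupt preempts everything, including the scheduler's own bookkeeping
# -- just like the caller who jumps ahead of the customer standing in the store.
```

Thrashing, in these terms, is what happens when "scheduler bookkeeping" tasks are submitted faster than anything else can be run.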

* * * 

Operating systems and courts of law cannot, unfortunately, go to sleep. Their 
snarls are very real, and some means of dealing with them has to be invented. It was 
considerations such as this that led Peter Suber to invent his tangled game of Nomic. 

He writes that he was struck by the oft-heard cynicism that "Government is just a 
game." Now, one essential activity of government is law-making, so if it is a game, then 
it is a game in which changing the laws (or rules) is a move. Moreover, some rules are 
needed to structure the process of changing the rules. Yet no legal system seems to have 
any rules that are absolutely immune to legal change. Suber's main aim, he wrote, was "to 
make a playable game that models this particular situation. But whereas governments are 
at any given moment pushed in various directions in their rule-changing by historical 
realities and the ideology of their people and 
existing rules, I wanted the game to start with as 'clean' an initial set of rules as possible." 
Nomic is such a game, and its rules (or rather, its Initial Set of rules) will be presented 
below. Most of the following description is in essence by Suber himself. I have simply 
interspersed some of my own observations. 

In legal systems, statutes are the paradigmatic rules. Statutes are made by a 
rule-governed process that is itself partly statutory; hence the power to make and change 
statutes can reach some of the rules governing the process itself. Most of the rules, 
however, that govern the making of statutes are constitutional and are therefore beyond 
the reach of the power they govern. For instance, Congress may change its parliamentary 
rules and its committee structure, and it may bind its future action by its past action, but it 
cannot, through mere statutes, alter the fact that a two-thirds "supermajority" is needed to 
override an executive veto, nor can it abolish or circumvent one of its houses, start a tax 
bill in the Senate, or even delegate too much of its power to experts. 

Although statutes cannot affect constitutional rules, the latter can affect the 
former. This is an important difference of logical priority. When there is a conflict 
between rules of different types, the constitutional rules always prevail. This logical 
level-distinction is matched by a political level-distinction, namely, that the logically prior 
(constitutional) rules are more difficult to amend than the logically posterior (statutory) 
rules. 

It is no coincidence that logically prior laws are harder to amend. One purpose of 
making some rules more difficult to change than others is to prevent a brief wave of 
fanaticism from undoing decades or even centuries of progress. This could be called 
"self -paternalism": a deliberate retreat from democratic principles, although one chosen 
for the sake of preserving democracy. It is our chosen insurance against our anticipated 
weak moments. But that purpose will not be met unless the two-tier (or multi-tier) system 
also creates a logical hierarchy in which the less mutable rules take logical priority over 
the more mutable rules; otherwise, the more mutable rules could by themselves undo the 
deeper and more abstract principles on which the whole system is based. If 
supermajorities and the concurrence of many bodies are necessary to protect the 
foundations of the system from hasty change, that protective purpose is frustrated if those 
foundations are reachable by rules requiring merely a simple majority of one legislature. 

Although all the rules in the American system are mutable, it is convenient to 
refer to the less mutable constitutional rules as immutable, and to the more mutable rules 
below them in the hierarchy as mutable. The same is true in Nomic, where, at least 
initially, no rule is literally immutable. If Nomic's self-paternalism is to be effective, then, 
its "immutable" rules, in addition to resisting easy amendment, must possess logical 
priority over the mutable rules. 

Many designs could satisfy this requirement. Nomic has adopted a simple 
two-tiered system, modeled to some extent on the U.S. Constitution. In principle, a system 
could have any number of degrees of difficulty in the 
amendment of rules. For instance, Class A rules, the hardest to amend, could require 
unanimity of a central body and the unanimous concurrence of all regional bodies. Class 
B rules could require 90 percent supermajorities, Class C rules 80 percent 
supermajorities, and so on. The number of such categories could be indefinitely large. 
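Each such tier amounts to a one-line adoption test. A sketch, with hypothetical class labels; integer arithmetic avoids any rounding quibbles:

```python
def adopted(votes_for, eligible_voters, percent_required):
    """A proposal passes if the share of eligible voters backing it
    meets its class's threshold."""
    return 100 * votes_for >= percent_required * eligible_voters

# Illustrative tiers: Class A needs 100 percent (unanimity),
# Class B 90 percent, Class C 80 percent.
CLASS_A, CLASS_B, CLASS_C = 100, 90, 80
```

Under this scheme a single dissenter blocks a Class A change (`adopted(9, 10, CLASS_A)` is false), while the same nine votes carry a Class B proposal.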

Indeed, if appropriate qualifications are made for the informality of custom and 
etiquette, a strong argument could be made that normal social life is just such a system of 
indefinite tiers. Near the top of the "difficult" end of the series of rules are actual laws, 
rising through case precedents, regulations, and statutes, all the way up to constitutional 
rules. At the bottom of the scale are rules of personal behavior that individuals can amend 
unilaterally without incurring disapprobation or censure. Above these are rules for which 
amendment is increasingly costly, starting with costs on the order of furrowed brows and 
clucked tongues, and passing through indignant blows and vengeful homicide. 

In any case, for the sake of simplicity and to make it easier to learn and play, 
Nomic is a clean two-tier system rather than a nuanced or multi-tier system like the U.S. 
Government, with its intermediate and substatutory levels such as parliamentary rules, 
administrative regulations, joint resolutions, treaties, executive agreements, higher and 
lower court decisions, state practice, judicial rules of procedure and evidence, executive 
orders, canons of professional responsibility, evidentiary presumptions, standards of 
reasonableness, rules establishing priority among rules, canons of interpretation, 
contractual rules, and so on. This is not to say that nuanced, intermediate levels may not 
arise in Nomic through game custom and tacit understandings. In fact, the nature of the 
game allows players to add new tiers by explicit amendment as they see fit, and one 
reason for making Nomic simple initially is that it is easier to add tiers to a simple game 
than it is to subtract them from a complex one. 

Nomic's two-tier system embodies the same self-paternalistic elements as does the 
Federal Constitution. The "immutable" rules govern more basic processes than the 
"mutable" ones do, and thus shield them from hasty change. Since, in the course of play, 
the central core of the game may change (and the minor aspects must change), after a few 
rounds the game being played by the players may in a certain sense be different from the 
one they were playing when they started. Yet needless to say, whatever results from 
compliance with the rules is, by definition, the game Nomic. The "feel" of the game may 
change drastically even as, at a deeper level, the game remains the same. 

In a similar way, human beings undergo constant development and self- 
modification, and yet continue to be convinced that it makes sense to refer, via such 
words as "I", to an underlying stable entity. The more 
immediately perceptible patterns change, whereas deeper and more hidden patterns 
remain the same. From birth to maturity to death, however, the changes can be so radical 
that one may sometimes feel that in a single lifetime one is several different people. 
Similarly, in law, many have acknowledged that an amendment clause (a clause defining 
how a constitution may be amended)-even a clause limited to piecemeal amendment- 
could, through repeated application, create a fundamentally new constitution. 

The fact that Nomic has more than one tier prevents the logical foundation of the 
game-the central core-from changing radically in just a few moves. Such continuity is a 
virtue both of games and of governments, but players of Nomic have an advantage over 
citizens in that, whenever they are so motivated, they can adjust the degree of continuity 
and the rate of change rather quickly, using their wits, whereas in real life the 
mechanisms by which such change could be effected are barely known and partially 
beyond reach. 

Standard games possess the continuity of unchanging rules, or at least of rules that 
change only between games, not during them. Nomic's continuity is more like that of a 
legal system than that of a standard game: it is a rule-governed set of systems, directives, 
and processes undergoing constant rule-governed change. If, however, one wants a 
specific entity to point to as being "Nomic itself", the Initial Set of rules, as presented 
below, will do. Yet Nomic is equally the product, at any given moment, of the dynamic 
rule-governed change of the Initial Set. The continuing identity of the game, like that of a 
nation or person, is due to the fact (if fact it is) that all change is the product of existing 
rules properly applied, and that no change is revolutionary. (One could even argue that 
revolutionary change is just more of the same: In a revolution, rules that have been 
assumed to be totally immutable simply are rendered mutable by other rules that are more 
deeply immutable, but that previously had been taken for granted and hence had been 
invisible, or tacit.) 

* * * 

In its Rule 212, Nomic includes provision for subjective judgment (as in a court 
of law), not merely to imitate government in yet another aspect, but for the same reasons 
that compel government itself to make provisions for judgment: rules will inevitably be 
made that are ambiguous, inconsistent, or incomplete, or that require application to 
individual circumstance. "Play" must not be interrupted; therefore some agency must be 
empowered to make an authoritative and final determination so that play can continue. 

Judgments in Nomic are not bound by rules of precedent, since that would require 
a daunting amount of record-keeping for each game. But the doctrine of stare decisis 
(namely, that precedents should be followed) may be imposed at the players' option, or it 
may arise without explicit amendment, 
as successive judges feel impelled to treat "similarly situated" persons "similarly". 
(Admittedly, the meanings of these terms in specific cases may well require further levels 
of judgment. This fact is one of the most dangerous sources of potential infinite regress in 
real court cases.) Without stare decisis, the players are constrained to draft their rules 
carefully, make thoughtful adjudications, overrule poor judgments, and amend defective 
rules. This is one way Nomic teaches basic principles and exigencies of law, even as it 
vastly simplifies. 

The Initial Set must be short and simple enough to encourage play, yet long and 
complex enough to cover contingencies likely to arise before the players get around to 
providing for them in a rule, and to prevent any single rule change from disturbing the 
continuity of the game. Whether the Initial Set presented below satisfies these competing 
interests is left to players to judge. 

One contingency deliberately left to the players to resolve is what to do about 
violations of the rules. The players must also decide whether old violations are protected 
by a statute of limitations or whether they may still be punished or nullified. Whether the 
likelihood of compliance and the discretionary power of the judge suffice to deal with a 
crisis of confidence or to delay it until a rule can take over, and whether in other respects 
the Initial Set satisfactorily balances the competing interests of simplicity and 
complexity, can best be determined by playing the game. 

* * * 

Nomic affords a curious twist on one common and fundamental property of 
games: it allows the blurring of the distinction between constitutive rules and rules of 
skill-that is, between rules that define lawful play and those that define artful play. In 
other words, in Nomic there is a blurring between the permissible and the optimal. 

Most games do not embrace non-play, and do not become paradoxical by seeming 
to. Interestingly, however, children often invent games that provide game penalties for 
declining to play, or that incorporate or extend game jurisdiction to all of "real life", and 
end only when the children tire of the game or forget they are playing. ("Daddy, Daddy, 
come play a new game we invented!" "No, sweetheart, I'm reading." "That's ten points!") 
Nomic carries this principle to an extreme. A game of Nomic can embrace anything at the 
vote of the players. The line between play and non-play may shift at each turn, or it may 
apparently be eliminated. Players may be governed by the game when they think they are 
between games or when they think they have quit. 

For most games, there is an infallible decision procedure to determine the legality 
of a move. In Nomic, by contrast, situations may easily arise where it is very hard to 
determine whether or not a move is legal. Moreover, paradoxes can arise in Nomic that 
paralyze judgment. Occasionally this will 
be due to the poor drafting of a rule, but it may also arise from a rule that is unambiguous 
but mischievous. The variety of such paradoxes is truly impossible to anticipate. Rule 
213, nonetheless, is designed to cope with them as well as possible without cluttering the 
Initial Set with too many legalistic qualifications. Note that Rule 213 allows a wily player 
to create a paradox, get it passed (if the rule seems innocent enough to the other players), 
and thereby win. 

So much for a general prologue to the game itself. Now we can move on to a 
description of how a game of Nomic is played. To reiterate, Nomic is a game in which 
changing the rules is a move. Two can play, but having three or more makes for a better 
game. The gist of Nomic is to be found in Rule 202, which should be read first. Players 
will need paper and pencil, and (at least at the outset!) one die. Instead of sheets of paper, 
players may find it easier to use a set of index cards. All new rules and amendments are 
to be written down. How the rules are positioned on paper or on the table can indicate 
which ones are currently immutable and which ones are mutable. Amendments can be 
placed on top of or next to the rules they amend. Inoperative rules may simply be deleted. 
Alternatively, for more complex games, players may prefer to transcribe into their own 
notebooks the text of each new rule or amendment and to keep a separate list, by number, 
of the rules still in effect. Ideally, perhaps, all rules should be entered in a computer, with 
a terminal for each player; amendments could then be incorporated instantly into the 
main text, with a corresponding adjustment to the numerical order. 
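The bookkeeping just described can be sketched as a small ledger. This is a hypothetical illustration of the record-keeping, not part of Nomic itself; the renumbering convention follows Rule 108 below:

```python
# Each rule carries its current number, its tier, and its text.
rules = {}

def enact(number, text, mutable=True):
    """Record a newly adopted rule. New rules are initially mutable."""
    rules[number] = {"text": text, "mutable": mutable}

def amend(old_number, new_number, new_text):
    # Per Rule 108, an amended rule takes the ordinal number of the
    # proposal that amended it; the old number drops out of use.
    entry = rules.pop(old_number)
    entry["text"] = new_text
    rules[new_number] = entry

enact(208, "The winner is the first player to achieve 100 (positive) points.")
amend(208, 301, "The winner is the first player to achieve 200 (positive) points.")
```

Keeping the ledger keyed by current number makes Rule 211's "lowest ordinal number takes precedence" test a simple comparison of keys.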

Initial Set of Rules of Nomic 

I. Immutable Rules 

101. All players must always abide by all the rules then in effect, in the form in 
which they are then in effect. The rules in the Initial Set are in effect whenever a 
game begins. The Initial Set consists of Rules 101-116 (immutable) and 201-213 (mutable). 

102. Initially, rules in the 100's are immutable and rules in the 200's are mutable. 
Rules subsequently enacted or transmuted (i.e., changed from immutable to 
mutable or vice versa) may be immutable or mutable regardless of their 
numbers, and rules in the Initial Set may be transmuted regardless of their numbers. 

103. A rule change is any of the following: (1) the enactment, repeal, or amendment 
of a mutable rule; (2) the enactment, repeal, or amendment of an amendment; or 
(3) the transmutation of an immutable rule into a mutable rule, or vice versa. 
(Note: This definition implies that, at least initially, all new rules are mutable. 
Immutable rules, as long as they are immutable, may not be amended or 
repealed; mutable rules, as long as they are mutable, may be amended or 
repealed. No rule is absolutely immune to change.) 

104. All rule changes proposed in the proper way shall be voted on. They will be 
adopted if and only if they receive the required number of votes. 

105. Every player is an eligible voter. Every eligible voter must participate in every 
vote on rule changes. 

106. Any proposed rule change must be written down before it is voted on. If 
adopted, it must guide play in the form in which it was voted on. 

107. No rule change may take effect earlier than the moment of the completion of 
the vote that adopted it, even if its wording explicitly states otherwise. No rule 
change may have retroactive application. 

108. Each proposed rule change shall be given a rank-order number (ordinal 
number) for reference. The numbers shall begin with 301, and each rule change 
proposed in the proper way shall receive the next successive integer, whether or 
not the proposal is adopted. 

If a rule is repealed and then re-enacted, it receives the ordinal number of the 
proposal to re-enact it. If a rule is amended or transmuted, it receives the ordinal 
number of the proposal to amend or transmute it. If an amendment is amended 
or repealed, the entire rule of which it is a part receives the ordinal number of 
the proposal to amend or repeal the amendment. 

109. Rule changes that transmute immutable rules into mutable rules may be 
adopted if and only if the vote is unanimous among the eligible voters. 

110. Mutable rules that are inconsistent in any way with some immutable rule 
(except by proposing to transmute it) are wholly void and without effect. They 
do not implicitly transmute immutable rules into mutable rules and at the same 
time amend them. Rule changes that transmute immutable rules into mutable 
rules will be effective if and only if they explicitly state their transmuting effect. 

111. If a rule change as proposed is unclear, ambiguous, paradoxical, or destructive 
of play, or if it arguably consists of two or more rule changes compounded or is 
an amendment that makes no difference, or if it is otherwise of questionable 
value, then the other players may suggest amendments or argue against the 
proposal before the vote. A reasonable amount of time must be allowed for this 
debate. The proponent decides the final form in which the proposal is to be 
voted on and decides the time to end debate and vote. The only cure for a bad 
proposal is prevention: a negative vote. 

112. The state of affairs that constitutes winning may not be changed from 
achieving n points to any other state of affairs. However, the magnitude of n and 
the means of earning points may be changed, and rules that establish a winner 
when play cannot continue may be enacted and (while they are mutable) be 
amended or repealed. 

113. A player always has the option to forfeit the game rather than continue to play 
or incur a game penalty. No penalty worse than losing, in the judgment of the 
player to incur it, may be imposed. 

114. There must always be at least one mutable rule. The adoption of rule changes 
must never become completely impermissible. 

115. Rule changes that affect rules needed to allow or apply rule changes are as 
permissible as other rule changes. Even rule changes that amend or repeal their 
own authority are permissible. No rule change or type of move is impermissible 
solely on account of the self-reference or self-application of a rule. 

Nomic: A Self-Modifying Game Based on Reflexivity in Law 


116. Whatever is not explicitly prohibited or regulated by a rule is permitted and 
unregulated, with the sole exception of changing the rules, which is permitted 
only when a rule or set of rules explicitly or implicitly permits it. 

II. Mutable Rules 

201. Players shall alternate in clockwise order, taking one whole turn apiece. Turns 
may not be skipped or passed, and parts of turns may not be omitted. All players 
begin with zero points. 

202. One turn consists of two parts, in this order: (1) proposing one rule change and 
having it voted on, and (2) throwing one die once and adding the number of 
points on its face to one's score. 

203. A rule change is adopted if and only if the vote is unanimous among the 
eligible voters. 

204. If and when rule changes can be adopted without unanimity, the players who 
vote against winning proposals shall receive 10 points apiece. 

205. An adopted rule change takes full effect at the moment of the completion of 
the vote that adopted it. 

206. When a proposed rule change is defeated, the player who proposed it loses 10 
points. 

207. Each player always has exactly one vote. 

208. The winner is the first player to achieve 100 (positive) points. 

209. At no time may there be more than 25 mutable rules. 

210. Players may not conspire or consult on the making of future rule changes 
unless they are teammates. 

211. If two or more mutable rules conflict with one another, or if two or more 
immutable rules conflict with one another, then the rule with the lowest ordinal 
number takes precedence. 

If at least one of the rules in conflict explicitly says of itself that it defers to 
another rule (or type of rule) or takes precedence over another rule (or type of 
rule), then such provisions shall supersede the numerical method for determining 
precedence. 

If two or more rules claim to take precedence over one another or to defer to 
one another, then the numerical method must again govern. 

212. If players disagree about the legality of a move or the interpretation or 
application of a rule, then the player preceding the one moving is to be the Judge 
and to decide the question. Disagreement, for the purposes of this rule, may be 
created by the insistence of any player. Such a process is called invoking judgment. 

When judgment has been invoked, the next player may not begin his or her 
turn without the consent of a majority of the other players. 

The judge's judgment may be overruled only by a unanimous vote of the other 
players, taken before the next turn is begun. If a judge's judgment is overruled, 
the player preceding the Judge in the playing order becomes the new judge for 
the question, and so on, except that no player is to be judge during his or her 
own turn or during the turn of a teammate. 

Unless a judge is overruled, one Judge settles all questions arising from the 
game until the next turn is begun, including questions as to his or her own 
legitimacy and jurisdiction as judge. 

New judges are not bound by the decisions of old judges. New judges may, 
however, settle only those questions on which the players currently disagree and 
that affect the completion of the turn in which judgment was invoked. All 
decisions by Judges shall be in accordance with all the rules then in effect; but 
when the rules are silent, inconsistent, or unclear on the point at issue, then the 
judge's only guides shall be common morality, common logic, and the spirit of 
the game. 

213. If the rules are changed so that further play is impossible, or if the legality of a 
move is impossible to determine with finality, or if by the judge's best reasoning, 
not overruled, a move appears equally legal and illegal, then the first player who 
is unable to complete a turn is the winner. 

This rule takes precedence over every other rule determining the winner. 
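
The initial mutable rules are concrete enough to sketch in code. What follows is a minimal, unofficial sketch: the function names and the dictionary representation of rules are my own assumptions, not anything Suber specifies. It models the unanimous-vote turn of Rules 202-206 and the three-step precedence test of Rule 211.

```python
import random

WINNING_SCORE = 100  # Rule 208 (mutable, so even this constant could change)

def take_turn(player, players, scores, rules, proposal, votes, die=None):
    """One turn under Rule 202: propose-and-vote, then throw one die."""
    # Rule 203: adoption requires unanimity among the eligible voters.
    if all(votes.get(p, False) for p in players):
        rules.append(proposal)          # Rule 205: effective immediately
    else:
        scores[player] -= 10            # Rule 206: a defeated proposal costs 10 points
    # Rule 202(2): add one die throw to the proposer's score.
    scores[player] += die if die is not None else random.randint(1, 6)
    return scores[player] >= WINNING_SCORE  # Rule 208: first to 100 wins

def precedes(rule_a, rule_b):
    """Rule 211: does rule_a take precedence over rule_b?
    Each rule is a dict with a 'number' and optional 'defers_to' /
    'overrides' sets of rule numbers (a hypothetical representation)."""
    a_yields = rule_b["number"] in rule_a.get("defers_to", set())
    b_yields = rule_a["number"] in rule_b.get("defers_to", set())
    a_claims = rule_b["number"] in rule_a.get("overrides", set())
    b_claims = rule_a["number"] in rule_b.get("overrides", set())
    # Mutual claims (or mutual deference) cancel out: the numerical
    # method must again govern.
    if (a_claims and b_claims) or (a_yields and b_yields):
        return rule_a["number"] < rule_b["number"]
    # An explicit, unopposed deference or precedence claim supersedes numbers.
    if a_claims or b_yields:
        return True
    if b_claims or a_yields:
        return False
    # Default: the rule with the lower ordinal number takes precedence.
    return rule_a["number"] < rule_b["number"]
```

Of course, the sketch freezes exactly what Nomic refuses to freeze: in actual play, `take_turn` itself would be subject to amendment.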

Whew! So there you have the rules of Nomic. After reading them, a friend of 
mine commented, "It won't ever replace Monopoly." I'll grant the truth of that, but it is 
certainly more interesting than Monopoly to contemplate playing! To make such 
contemplation even more intriguing, Suber, who has actually played this crazy-sounding 
game, offers a wide variety of suggestions for interesting types of rule changes. Here are 
some samples. 

Make mutable rules easier to amend than immutable rules, by repealing the 
unanimity requirement of Initial Rule 203 and substituting (say) a simple majority. Add 
new tiers above, below, or between the two tiers with which Nomic begins. Make some 
rules amendable only by special procedures ("incomplete self-entrenchment"). Devise 
"sunset" rules that automatically expire after a certain number of turns. Allow private 
consultation between players on future rule changes ("log-rolling"). Allow secret ballots. 
Allow "constitutional conventions" (or "revolutions") in which all the rules are more 
easily and jointly subject to change according to new, temporary procedures. Put an 
upper limit on the number of initially immutable rules that at any given time may be 
mutable or repealed. 

Allow the ordinal numbers of rules to change in certain contingencies, thereby 
changing their priorities. Or alter the very method of determining precedence; for 
example, make more recent rules take precedence over earlier rules, rather than vice 
versa. (In most actual legal systems, the rule of priority favors recent rules.) 

Convert the point-earning mechanism from one based on randomness to one 
based on skill (intellectual or even athletic). Apply a formula to the number on the die so 
that it will increase the number of points awarded to any player whose proposal gets 
voted down or whose judgment gets overruled, but will decrease the number of points 
awarded to a player who votes nay, who proposes a rule change of more than 50 words, 
who takes more than two minutes to propose a rule change, who proposes to transmute an 
immutable rule to a mutable rule, or who proposes a rule that is enacted but is later 
repealed. 

Introduce a second or third objective-for example, a cooperative objective, to 
complement the competitive objective of earning more points. Thus, each player might, 
on each turn, contribute a letter to a growing sentence, a line to a growing poem, a block 
to a growing castle, and so on, the group as a whole trying to complete the thing before 
one of them reaches the winning number of points. Or introduce a second competitive 
objective, such as having each player make a move in another game, with the winner (or 
winners) of the game that is finished first obtaining some predetermined advantage in the 
game that is still being played. Or make some aspect of the game conditional on the 
outcome of a different game, thus incorporating into Nomic any other game or activity 
that can muster enough votes. Similarly, leave Nomic pure but add stakes or drama (such 
as psychodrama). 

Institute team play. Require permanent team combinations or allow alliances to 
shift according to procedures (informal negotiation, an algebraic formula applied to 
scores, or systematic rotation of partners). Create "hidden" partners (e.g., the points a 
player earns in a turn are also added to the score of another player, or split with one, 
selected by a mechanism). 

Extend the aptness of the game as a model of the legislative process by inventing 
an index that goes up and down according to events in the game and that measures 
"constituency pressure" or "constituency satisfaction"; use the index to constrain 
permissible moves (e.g., through a system of rewards and penalties). Allow a certain 
number of turns to pass before a proposal is voted on, giving the players the opportunity 
to see what other proposals may be adopted in its place. 

Suber's ultimate challenge to players of Nomic is this: to ascertain whether any 
rules can be made genuinely immutable while preserving some rule-changing power, and 
whether the power to change the rules can be irrevocably and completely repealed. Suber 
is interested in hearing from readers about their experiences in playing Nomic, as well as 
any suggestions for improvement or comments on reflexivity in law generally. His 
address is: Department of Philosophy, Earlham College, Richmond, Indiana 47374. 

* * * 

The richness of the Nomic universe is abundantly clear. It certainly meets every 
hope I had when, in my book Godel, Escher, Bach: an Eternal Golden Braid, 

I wrote about self-modifying games. It was my purpose there to describe such 
games in the abstract, never imagining that anyone would work out a game so fully in the 
concrete. It had been a dream of mine for a long time to devise a system that was in some 
sense capable of modifying every aspect of itself, so that even if it had what I referred to 
as "inviolate" levels (corresponding roughly to Suber's "immutable" rules), they could be 
modified as well. 

I vividly remember how this dream came about. I was a high school student when 
I first heard about computers from the late George Forsythe, then a professor of 
mathematics at Stanford (there was no such thing as a department of computer science 
yet). In his guest lecture to our math class he emphasized two things. One was the notion 
that the purpose of computing was to do anything that people could figure out how to 
mechanize. Thus, he pointed out, computing would inexorably make inroads on one new 
domain after another, as we came to recognize that an activity that had seemed to require 
ever-fresh insights and mental imagery could be replaced by an ingenious and subtly 
worked-out collection of rules, the execution of which would then be a form of glorified 
drudgery carried out at the speed of light. For me, one of Forsythe's most stunning 
illustrations of this notion was the way computers had in some sense been applied to 
themselves-namely in compilers, programs that translate programs from an elegant and 
human-readable language into the cryptic strings of 0's and 1's of machine language. 

The other notion Forsythe emphasized-and it was closely related to the first one- 
was the fact that a program is just an object that sits in a computer's memory, and as such 
is no more and no less subject to manipulation by other programs-or even by itself!-than 
mere numbers are. The fusion of these two notions was what gave me my inspiration to 
design an abstract computer. Playing on the names of the ENIAC, ILLIAC, JOHNNIAC, 
and other computers I had heard of, I called it "IACIAC". I hoped IACIAC could not 
only manipulate its own programs but also redesign itself, change the way it interpreted 
its own instructions, and so on. I quickly ran into many conceptual difficulties and never 
completed the project, but I have never forgotten that fascination. It seems to me that 
although it is a game and not a computer, Nomic comes closer in spirit to that goal I 
sought than anything I have ever encountered. That is, except for itself. 

Post Scriptum. 

As a result of the publication of this column, I received a letter from a law 
professor named William Popkin, who obviously had found the game of Nomic 
fascinating while disagreeing philosophically with some points expressed. Subsequently, 
an exchange between Popkin and me was printed in the "Letters" column in Scientific 
American. Here is what Popkin had to say: 

As a law professor I was very interested in Douglas Hofstadter's piece on 
reflexivity and self-reference in the law. There are, as he says, many examples. 
Article V of the United States Constitution prohibits amendments denying 
states equal representation in the Senate. The Supreme Court of India went out of its 
way to create a reflexivity problem by deciding that the normal process of amending 
the Indian Constitution did not apply to their Bill of Rights, even though no explicit 
provision prohibiting such amendments existed. 

These reflexivity problems are fascinating, but I do not see what they have to 
do with "general procedures of argument", as Hofstadter (quoting Howard DeLong) 
suggests. They have everything to do with the meaning of rules, law, and politics, but 
not with procedures of argument. Let me explain how at least one law professor would 
approach these problems: Every reflexivity example has the same structure. There is a 
rule that has specific cases coming under the rule. One particular case, by coming 
under the rule, appears to undermine the rule itself. For example, assume that the 
Supreme Court must decide cases properly appealed to it, but that no judge can sit on a 
case in which he is personally interested. A case arises involving the reduction of 
judges' salaries, which is arguably unconstitutional. If the judges decide the case, they 
violate the rule against deciding cases in which they are personally interested, but 
failure to decide violates the rule requiring them to decide cases. The same structure 
exists for rules about amendment of the document containing the amending provision. 
Assume that the Constitution can be amended by a two-thirds vote but that one of the 
provisions requires a 100 percent vote. An amendment is passed changing the 
unanimity rule. If the amendment is valid, the unanimity rule is undermined, but if the 
amendment is invalid, the procedures for amendment are incomplete. 

What is presented in all these cases is a problem of meaning and a conflict 
between rival conclusions, not a logical conundrum. The ultimate decision may be 
hard or easy, but the issues are not difficult to conceptualize. My own conclusion is 
that the Supreme Court should hear the case involving its own salary because we do 
not want Congress deciding such issues, and that the amending power should not 
extend to the unanimity rule because this breaks the social contract. These are hard 
cases, but another example presented in Hofstadter's article is easy. It concerns a 
contract to pay the rhetoric teacher Protagoras when his pupil Euathlus wins his first 
case. The teacher sues the pupil for the payment, figuring that if he wins the suit he 
gets his money and if he loses the suit he collects under the contract. But on what 
possible ground could he win the case before the pupil had won a lawsuit? And how 
could the original contract, in referring to a victory by the pupil as the occasion for the 
payment, include a victory in a frivolous lawsuit by the teacher? 

What I am pointing out is that reflexivity presents problems of choice, 
sometimes difficult, sometimes trivial, but that is nothing new in the law. Most 
important legal problems involve choice without involving reflexivity. Do we prefer a 
right of privacy or freedom of the press? The deeper point concerns the interaction of 
law and artificial intelligence and perhaps interdisciplinary studies generally. 
Reflexivity is undoubtedly an important phenomenon in philosophy for reasons I do 
not fully appreciate. If developments in artificial intelligence are to be useful in law, 
however, they must take into account what legal problems are all about. To a lawyer, 
reflexivity is not a relevant category but choice is. Indeed, I suspect that reflexivity is 
just a diversion for Hofstadter. In an earlier article about analogy he dealt with the 
imaginative problem of defining the First Lady of Britain [see Chapter 24]. He there 
grappled with the 
problem of deciding what is like something else, which is the way most lawyers 
always proceed in making choices. How we make analogies determines how we make 
choices, and that is the essential nature of all judgment. If that is what artificial 
intelligence is all about, I very much want to hear more. 

As for the question of whether there are immutable rules, the answer is: Of 
course there are, if that's what you want. 

William D. Popkin 
Professor of Law 
Indiana University 

I found this letter very nicely put, and a constructive opening for a small debate. I 
replied as follows: 

Professor Popkin raises a very interesting point in his comment on my column 
about Peter Suber's game Nomic. His point is essentially twofold: (1) The fact that any 
legal system is inevitably chock-full of tangles arising from reflexivity is amusing, but 
rather than being themselves a deep aspect of law, such tangles are a consequence of 
other deep aspects, the most significant of which is that (2) the crux of any legal system is 
the ability of people to distinguish between the incidental qualities and the essential 
qualities of various events and relations, which ability results finally in recognition of 
what a given item is-that is, which category the item belongs to. Popkin calls this 
"choice". In conclusion, he suggests that to discover the principles by which people can 
"choose" is a critical task for artificial-intelligence workers to tackle. 

I feel that neither Suber's reflexivity nor Popkin's choice is more central than the 
other in defining the nature of law. In fact, they are intertwined. Suber stresses that 
people, in choosing which of two inconsistent aspects of a supposedly self-consistent 
system shall take precedence, often make their choice without explicit rules (since if the 
rules were spelled out, they would be susceptible to getting embroiled in a similar tangle 
once again, only at a higher level of abstraction). "Law can disregard logical difficulties 
and ground a solution on pragmatic rules, social policies, and legal doctrines", Suber has 
written [in a reply to Popkin]. "The effectiveness of policy, or what Popkin calls 'choice', 
in plowing under logical obstacles is not the answer to the question but the mystery to be 
explained." 

Coming to grips with this contrast between explicit rules and implicit principles or 
guidelines is of great importance if one wants to characterize how flexible category 
recognition-"choice"-takes place, whether one is doing research in artificial intelligence, 
philosophizing about free will, or attempting to characterize the nature of law. Popkin, in 
fact, is rather charitable toward artificial-intelligence research, suggesting that it may 
some day yield clues, if not the key, to the mystery of choice. I think he is right about 
this. He may have failed to realize, however, that in any attempt to make a machine 
capable of choice, one runs headlong into the problem of inconsistencies, level-collisions, 
and reflexivity tangles, and for the following reason. 

All recognition programs are invariably modeled on what we know about 
perception in various modalities, such as hearing and sight. One thing we know 
for sure is that in any modality, perception consists of many layers of processing, 
from the most primitive or "syntactic" levels, to the most abstract or "semantic" 
levels. The zeroing-in on the semantic category to which a given raw stimulus 
belongs is carried out not by a purely bottom-up (stimulus-driven) or purely top- 
down (category-driven) scheme, but rather by a mixture of them, in which 
hypotheses at various levels trigger the creation of new hypotheses or undermine 
the existence of already-existing hypotheses at other levels. This process of 
sprouting and pruning hypotheses is a highly parallel one, in which all the levels 
compete simultaneously for attention, like billboards or radio commercials or 
advertisements in the subway. 

Yet out of this seemingly anarchic chaos comes an integrated decision, in 
which the various levels gradually come to some kind of self-reinforcing 
agreement. If a firm decision is to emerge from such a swirl of conflicting claims, 
there must be some kind of mental scheduler, something that functions like Robert's 
Rules of Order, letting various levels have the floor, scheduling collective actions 
such as votes, overriding or tabling motions, and so on. In fact, to the best of our 
knowledge, this is the heart of the perceptual process. But this is the very place 
where reflexivity tangles crop up with a vengeance! 

Any perception program has various levels of "inner sanctum"-that is, levels 
of untouchability of its data structures. (These structures include not only the 
current hypotheses, but also deeper, more permanent aspects of the program itself, 
such as the ways it weights various pieces of evidence, the rules by which it sorts 
out conflicts, the priority rules of its scheduler, and-of course-the information 
about the untouchability of levels!) Now, for the ultimate in flexibility, none of 
these levels should be totally untouchable (although that degree of flexibility may 
be unattainable), but obviously some levels should be less touchable than others. 
Therefore any recognition program must have at its core a tiered structure precisely 
like that of government (or that of the rules of Nomic), in which there are levels 
that are "easily mutable", "moderately mutable", "almost mutable", and so on. The 
structure of a recognition program-a "choice" program-is seen inevitably to be 
riddled with reflexivity. 
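
The tiered structure just described can be caricatured in a few lines of code. This is a toy sketch, not a real perception program; every name and threshold in it is my own invention, meant only to show how "levels of untouchability" and a crude scheduler might be represented. Note the reflexive twist: the tier table is itself an entry in the tier table, so sufficiently strong evidence could retune the tiers.

```python
# Hypothetical mutability tiers: higher thresholds mean "less touchable".
TIERS = {
    "hypothesis":        0.1,   # easily mutable: current perceptual guesses
    "evidence_weights":  0.5,   # moderately mutable: how evidence is scored
    "scheduler_rules":   0.8,   # almost immutable: conflict-resolution policy
    "tier_table":        0.95,  # the untouchability information itself
}

def try_modify(structure_kind, evidence_strength, tiers=TIERS):
    """A change to a structure succeeds only if the evidence for it
    clears that structure's tier threshold."""
    return evidence_strength > tiers[structure_kind]

def settle(hypotheses):
    """Crude stand-in for the 'mental scheduler': competing hypotheses
    (mapped to their current support) are resolved in favor of the one
    with the most self-reinforcing agreement."""
    return max(hypotheses, key=hypotheses.get)
```

In a real recognition program the thresholds, the weighting of evidence, and the scheduler itself would all be subject to revision by the very process they govern, which is precisely where the Nomic-like tangles enter.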

The point of all this is that the very reflexivity issues that Popkin considers 
to be merely amusing sideshows in law are actually deeply embroiled in what he 
sees as the meat of the matter, namely the question of how category recognition- 
discerning the essence of something-works. For that reason, I found Suber's game 
not merely amusing but philosophically provocative as well. In fact, I consider the 
intertwined study of reflexivity and recognition, using the fresh methods of the 
emerging discipline of cognitive science, to be of great interest and importance for 
the light it may shed on the ancient philosophical problems of mind, free will, and 
identity-not to mention those of the philosophy of law. 

* * * 

It occurs to me that the message of my letter to Popkin could be put in a nutshell 
this way: To get flexible cognition, concentrate on reflexivity and recognition. Some of 
these ideas will come up again, more specifically in the context of artificial intelligence, 
in Chapters 23 and 24. 

Section II. 
Sense and Society 


Another broad theme of this book is introduced in the four chapters comprising 
this section: the harm that occurs when vast numbers of people accept without reflection 
the words, sayings, ideas, fads, styles, and tastes paraded in front of them by 
indiscriminate media and popular myth. Our society does a rather poor job of making us 
aware of, let alone interested in, the nature of common sense, the hidden assumptions that 
permeate thought, the complex mechanisms of sensory perception and category systems, 
the will to believe, the human tendency toward gullibility, the most typical flaws in 
arguments, the statistical inferences we make unconsciously, the vastly different temporal 
and spatial scales on which one can look at the universe, the many filters through which 
one can perceive and conceptualize people and events, and so on. The resulting 
deceptions, delusions, confusions, ignorances, and fears can lead to many disquieting 
social consequences, such as mildly or absurdly wasteful spending of funds, blatant or 
subtle discrimination against groups, and local or global apathy about the current state 
and momentum of the world. Of course everyone labors under some delusions, avoids 
certain kinds of thoughts, has an overly closed mind on this or that subject. What, 
however, are the consequences when this is multiplied by hundreds or thousands of 
millions, and all the small pieces are woven together into a vast fabric? What does a 
carpet woven from the incomplete understandings and ignorances of five billion sentient 
beings look like from afar-and where is this flying carpet headed? 

World Views in Collision: 

The Skeptical Inquirer 
versus the National Enquirer 

February, 1982 

Baffled Investigators and Educators Disclose ... 

A Cross between Human Beings and Plants ... 
PEOPLE ... Bizarre Creatures Could Do Anything 
You Want 

Alien from Space Shares Woman's Mind and Body, 
Hypnosis Reveals 

-Headlines from the National Enquirer 

Did the child you once were ever wonder why the declarative sentences in comic 
books always ended with exclamation points? Were all those statements really that 
startling? Were the characters saying them really that thrilled? Of course not! Those 
exclamation points were a psychological gimmick put there purely for the sake of 
appearance, to give the story more pizzazz! 

The National Enquirer, one of this country's yellowest and purplest journalistic 
institutions, uses a similar gimmick! Whenever it prints a headline trumpeting the 
discovery of some bizarre, hitherto unheard-of phenomenon, instead of ending it with an 
exclamation point, it ends it (or begins it) with a reference to "baffled investigators", 
"bewildered scientists", or similarly stumped savants! It is an ornament put there to make 
the story seem to have more credibility! 

Or is it? What do the editors really want? That the story appear credible 
or that it appear incredible? It seems they want it both ways: they want the story to sound 
as outlandish as possible and yet they want it to have the appearance of authenticity. 
Their ideal headline should thus embody a contradiction: impossibility coupled with 
certainty. In short, confirmed nonsense. 

What is one to make of headlines like those printed above? Or of articles about 
plants that sing in Japanese, and calculating cacti? Or of the fact that this publication is 
sold by the millions every week in grocery stores, and that people gobble up its stories as 
voraciously as they do potato chips? Or of the fact that when they are through with it, 
they can turn to plenty of other junk food for thought, such as the National Examiner, the 
Star, the Globe, and, perhaps the most lurid of the lot, the Weekly World News? What is 
one to think? For that matter, what are Martians to think? (See Figure 5-1.) 

FIGURE 5-1. A Martian's reaction to a tabloid article. Note the complex diacritical marks 
of the Martian language, regrettably unavailable on most Terran typesetting machines. 
[Photograph by David J. Moser.] 

Naturally, one's first reaction is to chuckle and dismiss such stories as silly. But how do 
you know they are silly? Do you also think that is a silly question? What do you think 
about articles printed in Scientific American? Do you trust them? What is the difference? 
Is it simply a difference in publishing style? Is the tabloid format, with its gaudy pictures 
and sensationalistic headlines, enough to make you distrust the National Enquirer? But 
wait a minute-isn't that just begging the question? What kind of argument is it when you 
use the guilty verdict as part of the case for the prosecution? What you need is a way of 
telling objectively what you mean by "gaudy" or "sensationalistic"-and that could prove 
to be difficult. 

And what about the obverse of the coin? Is the rather dignified, traditional format 
of Scientific American-its lack of photographs of celebrities, for example-what convinces 
you it is to be trusted? If so, that is a pretty curious way of making decisions about what 
truth is. It would seem that your concept of truth is closely tied in with your way of 
evaluating the "style" of a channel of communication-surely quite an intangible notion! 

Having said that, I must admit that I, too, rely constantly on quick assessments of 
style in my attempt to sift the true from the false, the believable from the unbelievable. 
(Quickness is of the essence, like it or not, because the world does not allow infinite time 
for deliberation.) I could not tell you what criteria I rely on without first pondering for a 
long time and writing many pages. Even then, were I to write the definitive guide (How 
to Tell the True from the False by Its Style of Publication), it would have to be published 
to do any good; and its title, not to mention the style it was published in, would probably 
attract a few readers, but would undoubtedly repel many more. There is something 
disturbing about that thought. 

There is something else disturbing here. Enormous numbers of people are taken 
in, or at least beguiled and fascinated, by what seems to me to be unbelievable hokum, 
and relatively few are concerned with or thrilled by the astounding-yet true-facts of 
science, as put forth in the pages of, say, Scientific American. I would proclaim with 
great confidence that the vast majority of what that magazine prints is true-yet my ability 
to defend such a claim is weaker than I would like. And most likely the readers, authors, 
and editors of that magazine would be equally hard pressed to come up with cogent, 
nontechnical arguments convincing a skeptic of this point, especially if pitted against a 
clever lawyer arguing the contrary. How come Truth is such a slippery beast? 

* * * 

Well, consider the very roots of our ability to discern truth. Above all (or perhaps 
I should say "underneath all"), common sense is what we depend on -that crazily elusive, 
ubiquitous faculty we all have, to some degree or other. But not to a degree such as 
"Bachelor's" or "Ph.D.". No, unfortunately, universities do not offer degrees in Common 
Sense. There 
are not even any Departments of Common Sense! This is, in a way, a pity. 

At first, the notion of a Department of Common Sense sounds ludicrous. Given 
that common sense is common, why have a department devoted to it? My answer would 
be quite simple: In our lives we are continually encountering strange new situations in 
which we have to figure out how to apply what we already know. It is not enough to have 
common sense about known situations; we need also to develop the art of extending 
common sense to apply to situations that are unfamiliar and beyond our previous 
experience. This can be very tricky, and often what is called for is common sense in 
knowing how to apply common sense: a sort of "meta-level" common sense. And this 
kind of higher-level common sense also requires its own meta-level common sense. 
Common sense, once it starts to roll, gathers more common sense, like a rolling snowball 
gathering ever more snow. Or, to switch metaphors, if we apply common sense to itself 
over and over again, we wind up building a skyscraper. The ground floor of this structure 
is the ordinary common sense we all have, and the rules for building new floors are 
implicit in the ground floor itself. However, working it all out is a gigantic task, and the 
result is a structure that transcends mere common sense. 

Pretty soon, even though it has all been built up from common ingredients, the 
structure of this extended common sense is quite arcane and elusive. We might call the 
quality represented by the upper floors of this skyscraper "rare sense"; but it is usually 
called "science". And some of the ideas and discoveries that have come out of this 
originally simple and everyday ability defy the ground floor totally. The ideas of 
relativity and quantum mechanics are anything but commonsensical, in the ground-floor 
sense of the term! They are outcomes of common sense self-applied, a process that has 
many unexpected twists and gives rise to some unexpected paradoxes. In short, it 
sometimes seems that common sense, recursively self-applied, almost undermines itself. 

Well, truth being this elusive, no wonder people are continually besieged with 
competing voices in print. When I was younger, I used to believe that once something 
had been discovered, verified, and published, it was then part of Knowledge: definitive, 
accepted, and irrevocable. Only in unusual cases, so I thought, would opposing claims 
then continue to be published. To my surprise, however, I found that the truth has to fight 
constantly for its life! That an idea has been discovered and printed in a "reputable 
journal" does not ensure that it will become well known and accepted. In fact, usually it 
will have to be rephrased and reprinted many different times, often by many different 
people, before it has any chance of taking hold. This is upsetting to an idealist like me, 
someone more disposed to believe in the notion of a monolithic and absolute truth than in 
the notion of a pluralistic and relative truth (a notion championed by a certain school of 
anthropologists and sociologists, who un-self-consciously insist "all systems of belief are 
equally valid", seemingly without realizing that this dogma of relativism not only is 
just as narrow-minded as any other dogma, but moreover is unbelievably 
wishy-washy!). The idea that the truth has to fight for its life is a sad discovery. The idea 
that the truth will not out, unless it is given a lot of help, is pretty upsetting. 

* * * 

A question arises in every society: Is it better to let all the different voices battle it 
out, or to have just a few "official" publications dictate what is the case and what is not? 
Our society has opted for a plurality of voices, for a "marketplace of ideas", for a 
complete free-for-all of conflicting theories. But if things are this chaotic, who will 
ensure that there is law and order? Who will guard the truth? The answer (at least in part) 
is: CSICOP will! 

CSICOP? Who is CSICOP? Some kind of cop who guards the truth? Well, that's 
pretty close. "CSICOP" stands for "Committee for the Scientific Investigation of Claims 
of the Paranormal" - a rather esoteric title for an organization whose purpose is not so 
esoteric: to apply common sense to claims of the outlandish and the implausible. 

Who are the people who form CSICOP and what do they do together? The 
organization was the brainchild of Paul Kurtz, professor of philosophy at the State 
University of New York at Buffalo, who brought it into being because he thought there 
was a need to counter the rising tide of irrational beliefs and to provide the public with a 
more balanced treatment of claims of the paranormal by presenting the dissenting 
scientific viewpoint. Among the early members of CSICOP were some of America's most 
distinguished philosophers (Ernest Nagel and Willard Van Orman Quine, for example) 
and other colorful combatants of the occult, such as psychologist Ray Hyman, magician 
James Randi, and someone whom readers of this column may have heard of: Martin 
Gardner. In the first few meetings, it was decided that the committee's principal function 
would be to publish a magazine dedicated to the subtle art of debunking. Perhaps 
"debunking" is not the term they would have chosen, but it fits. The magazine they began 
to publish in the fall of 1976 was called The Zetetic, from the Greek for "inquiring". 

As happens with many fledgling movements, a philosophical squabble developed 
between two factions, one more "relativist" and unjudgmental, the other more firmly 
opposed to nonsense, more willing to go on the offensive and to attack supernatural 
claims. Strange to say, the open-minded faction was not so open-minded as to accept the 
opposing point of view, and consequently the rift opened wider. Eventually there was a 
schism. The relativist faction (one member) went off and started publishing his own 
journal, the Zetetic Scholar, in which science and pseudo-science coexist happily, while 
the larger faction retained the name "CSICOP" and changed the title of its journal to the 
Skeptical Inquirer. 

In a word, the purpose of the Skeptical Inquirer is to combat nonsense. It does so 
by recourse to common sense, and as much as possible by recourse to the ground 
floor of the skyscraper of science-the common type of common sense. This is by no 
means always possible, but it is the general style of the magazine. This means it is 
accessible to anyone who can read English. It does not require any special knowledge or 
training to read its pages, where nonsensical claims are routinely smashed to smithereens. 
(Sometimes the claims are as blatantly silly as the headlines at the beginning of this 
article, sometimes much subtler.) All that is required to read this maverick journal is 
curiosity about the nature of truth: curiosity about how truth defends itself (through its 
agent CSICOP) against attacks from all quarters by unimaginably imaginative theorizers, 
speculators, eccentrics, crackpots, and out-and-out fakers. 

The journal has grown from its original small number of subscribers to roughly 
7,500-a David, compared with the Goliaths mentioned above, with their circulations in 
the millions. Its pages are filled with lively and humorous writing- the combat of ideas in 
its most enjoyable form. By no means is this journal a monolithic voice, a mouthpiece of 
a single dogma. Rather, it is itself a marketplace of ideas, strangely enough. Even people 
who wield the tool of common sense with skill may do so with different styles, and 
sometimes they will disagree. 

There is something of a paradox involved in the editorial decisions in such a 
magazine. After all, what is under debate here is, in essence, the nature of correct 
arguments. What should be accepted and what shouldn't? To caricature the situation, 
imagine the editorial dilemmas that would crop up for journals with titles such as Free 
Press Bulletin, The Open Mind, or Editorial Policy Newsletter. What letters to the editor 
should be printed? What articles? What policy can be invoked to screen submitted material? 

These are not easy questions to answer. They involve a paradox, a tangle in which 
the ideas being evaluated are also what the evaluations are based on. There is no easy 
answer here! There is no recourse but to common sense, that rock-bottom basis of all 
rationality. And unfortunately, we have no foolproof algorithm to uniquely characterize 
that deepest layer of rationality, nor are we likely to come up with one soon. The 
ability to use common sense-no matter how much light is shed on it by psychologists or 
philosophers-will probably forever remain a subjective art more than an objective 
science. Even when experimental epistemologists, in their centuries-long quest for 
artificial intelligence, have at last made a machine that thinks, its common sense will 
probably be just as instinctive and fallible and stubborn as ours. Thus at its core, 
rationality will always depend on inscrutables: the simple, the elegant, the intuitive. This 
weird paradox has existed throughout intellectual history, but in our information-rich 
times it seems particularly troublesome. 

Despite these epistemological puzzles, which seem to be intimately connected 
with its very reason for existence, the Skeptical Inquirer is flourishing and provides a 
refreshing antidote to the jargon-laden journals of science, which often seem 
curiously irrelevant to the concerns of everyday life. In that 
one way, the Inquirer resembles the scandalous tabloids. 

The list of topics covered in the seventeen issues that have appeared so far is 
remarkably diverse. Some topics have arisen only once, others have come up regularly 
and been discussed from various angles and at various depths. Some of the more 
commonly discussed topics are: 

ESP (extra-sensory perception) * telekinesis (using mental power to influence 
events at a distance) * astrology * biorhythms * Bigfoot * the Loch Ness monster * 
UFO's (unidentified flying objects) * creationism * telepathy * remote viewing * 
clairvoyant detectives who allegedly solve crimes * the Bermuda (and other) 
triangles * "thoughtography" (using mental power to create images on film) * the 
supposed extraterrestrial origin of life on the earth * Carlos Castaneda's mystical 
sorcerer "Don Juan" * pyramid power * psychic surgery and faith healing * 
Scientology * predictions by famous "psychics" * spooks and spirits and haunted 
houses * levitation * palmistry and mind reading * unorthodox anthropological 
theories * plant perception * perpetual-motion machines * water witching and other 
kinds of dowsing * bizarre cattle mutilations 

When I contemplate the length of this list, I am quite astonished. Before I ever subscribed 
to the magazine, I had heard of almost all these items and was skeptical of most of them, 
but I had never seen a frontal assault mounted against so many paranormal claims at 
once. And I have only scratched the surface of the list of topics, because the ones listed 
above are regulars! Imagine how many topics are treated at shorter length. 

There are quite a few frequent contributors to this iconoclastic journal, such as 
James Randi, who is truly prolific. Among others are aeronautics writer Philip J. Klass, 
UFO specialist James E. Oberg, writer Isaac Asimov, CSICOP's founder (and current 
director) Paul Kurtz, psychologist James Alcock, educator Elmer Krai, anthropologist 
Laurie Godfrey, science writer Robert Sheaffer, sociologist William Sims Bainbridge, 
and many others. And the magazine's editor, Kendrick Frazier, a free-lance science writer 
by trade, periodically issues eloquent and mordant commentaries. 

* * * 

I know of no better way to impart the flavor of the magazine than to quote a few 
selections from articles. One of my favorite articles appeared in the second issue 
(Spring/Summer, 1977). It is by psychologist Ray Hyman (who, incidentally, like many 
other authors in the Skeptical Inquirer, is a talented magician) and is titled "Cold 
Reading: How to Convince Strangers that You Know All About Them". 

It begins with a discussion of a course Hyman taught about the various ways 
people are manipulated. Hyman states: 

I invited various manipulators to demonstrate their techniques-pitchmen, 
encyclopedia salesmen, hypnotists, advertising experts, evangelists, confidence 
men and a variety of individuals who dealt with personal problems. The techniques 
which we discussed, especially those concerned with helping people with their 
personal problems, seem to involve the client's tendency to find more meaning in 
any situation than is actually there. Students readily accepted this explanation when 
it was pointed out to them. But I did not feel that they fully realized just how 
pervasive and powerful this human tendency to make sense out of nonsense really is. 

Then Hyman describes people's willingness to believe what others tell them about 
themselves. His "golden rule" is: "To be popular with your fellow man, tell him what he 
wants to hear. He wants to hear about himself. So tell him about himself. But not what 
you know to be true about him. Oh, no! Never tell him the truth. Rather, tell him what he 
would like to be true about himself!" As an example, Hyman cites the following passage 
(which, by an extraordinary coincidence, was written about none other than you, dear reader): 

Some of your aspirations tend to be pretty unrealistic. At times you are extroverted, 
affable, sociable, while at other times you are introverted, weary, and reserved. You 
have found it unwise to be too frank in revealing yourself to others. You pride yourself 
on being an independent thinker and do not accept others' opinions without 
satisfactory proof. You prefer a certain amount of change and variety, and become 
dissatisfied when hemmed in by restrictions and limitations. At times you have serious 
doubts as to whether you have made the right decision or done the right thing. 
Disciplined and controlled on the outside, you tend to be worrisome and insecure on 
the inside. 

Your sexual adjustment has presented some problems for you. While you have 
some personality weaknesses, you are generally able to compensate for them. You 
have a great deal of unused capacity which you have not turned to your advantage. 
You have a tendency to be critical of yourself. You have a strong need for other 
people to like you and for them to admire you. 

Pretty good fit, eh? Hyman comments: 

The statements in this stock spiel were first used in 1948 by Bertram Forer in a 
classroom demonstration of personal validation. He obtained most of them from a 
newsstand astrology book. Forer's students, who thought the sketch was uniquely 
intended for them as a result of a personality test, gave the sketch an average rating of 
4.26 on a scale of 0 (poor) to 5 (perfect). As many as 16 out of his 39 students (41 
percent) rated it as a perfect fit to their personality. Only five gave it a rating below 4 
(the worst being a rating of 2, meaning "average"). Almost 30 years later students give 
the same sketch an almost identical rating as a unique description of themselves. 

A particularly delicious feature is the thirteen-point recipe that Hyman gives for 
becoming a cold reader. Among his tips are these: "Use the technique of 'fishing' 
(getting the subject to tell you about himself or herself, then 
rephrasing it and feeding it back); always give the impression that you know more than 
you are saying; don't be afraid to flatter your subject every chance you get." This cynical 
recipe for becoming a character reader is presented by Hyman in considerable detail, 
presumably not to convert readers of the article into charlatans and fakers, but to show 
them the attitude of the tricksters who do such manipulations. Hyman asks: 

Why does it work so well? It does not help to say that people are gullible or 
suggestible. Nor can we dismiss it by implying that some individuals are just not 
sufficiently discriminating or lack sufficient intelligence to see through it. Indeed, one 
can argue that it requires a certain degree of intelligence on the part of a client for the 
reading to work well .... We have to bring our knowledge and expectations to bear in 
order to comprehend anything in our world. In most ordinary situations, this use of 
context and memory enables us to correctly interpret statements and supply the 
necessary inferences to do this. But this powerful mechanism can go astray in 
situations where there is no actual message being conveyed. Instead of picking up 
random noise, we still manage to find meaning in the situation. So the same system 
that enables us to creatively find meanings and to make new discoveries also makes us 
extremely vulnerable to exploitation by all sorts of manipulators. In the case of the 
cold reading, the manipulator may be conscious of his deception; but often he too is a 
victim of personal validation. 

Hyman knows what he's talking about. Many years ago, he was convinced for a 
time that he himself had genuine powers to read palms, until one day when he tried 
telling people the exact opposite of what their palms told him and saw that they still 
swallowed his line as much as ever! Then he began to suspect that the plasticity of the 
human mind-his own particularly-was doing some strange things. 

* * * 

At the beginning of each issue of the Skeptical Inquirer is a feature called "News 
and Comment". It covers such things as the latest reports on current sensational claims, 
recently broadcast television shows for and against the paranormal, lawsuits of one sort 
or another, and so on. One of the most amusing items was the coverage in the Fall 1980 
issue of the "Uri Awards", given out by James Randi (on April 1, of course) to various 
deserving souls who had done the most to promote gullibility and irrational beliefs. Each 
award consists of "a tastefully bent stainless-steel spoon with a very transparent, very 
flimsy base". Award winners were notified, Randi explained, by telepathy, and were "free 
to announce their winning in advance, by precognition, if they so desired". Awards were 
made in four categories: Academic ("to the scientist who says the dumbest thing about 
parapsychology"), Funding ("to the funding organization that awards the most money 
for the dumbest things in parapsychology"), Performance ("to the psychic 
who, with the least talent, takes in the most people"), and Media ("to the news 
organization that supports the most outrageous claims of the paranormalists"). 

The nature of coincidences is a recurrent theme in discussions of the paranormal. I 
vividly remember a passage in a lovely book by Warren Weaver titled Lady Luck: The 
Theory of Probability, in which he points out that in many situations, the most likely 
outcome may well be a very unlikely event (as when you deal hands in bridge, where 
whatever hand you get is bound to be extraordinarily rare). A similar point is made in the 
following excerpt from a recent book by David Marks and Richard Kammann titled The 
Psychology of the Psychic (from which various excerpts were reprinted in one issue of 
the Skeptical Inquirer): 

'Koestler's fallacy' refers to our general inability to see that unusual events are 
probable in the long run .... It is a simple deduction from probability theory that an 
event that is very improbable in a short run of observations becomes, nevertheless, 
highly probable somewhere in a long run of observations .... We call it 'Koestler's 
fallacy' because Arthur Koestler is the author who best illustrates it and has tried to 
make it into a scientific revolution. Of course, the fallacy is not unique to Koestler but 
is widespread in the population, because there are several biases in human perception 
and judgment that contribute to this fallacy. 

First, we notice and remember matches, especially oddmatches, whenever they 
occur. (Because a psychic anecdote first requires a match, and, second, an oddity 
between the match and our beliefs, we call these stories oddmatches. This is 
equivalent to the common expression, an "unexplained coincidence".) Second, we do 
not notice non-matches. Third, our failure to notice nonevents creates the short-run 
illusion that makes the oddmatch seem improbable. Fourth, we are poor at estimating 
combinations of events. Fifth, we overlook the principle of equivalent oddmatches, 
that one coincidence is as good as another as far as psychic theory is concerned. 
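The long-run point lends itself to a quick calculation. The following Python sketch is my own illustration, not from the article or the book; the "one in a million" probability and the number of occasions are arbitrary assumptions chosen to make the arithmetic vivid. It also quantifies Weaver's bridge-hand observation from the preceding paragraph:

```python
# A numerical sketch (illustrative assumptions, not from the article) of two points:
# (1) Weaver's bridge-hand example: any *particular* 13-card hand is
#     astronomically unlikely, yet you are certain to be dealt one of them.
# (2) "Koestler's fallacy": an event very improbable on any single occasion
#     becomes highly probable somewhere in a long run of occasions.

from math import comb

# (1) Probability of being dealt one specific bridge hand.
bridge_hands = comb(52, 13)          # number of distinct 13-card hands from a 52-card deck
p_hand = 1 / bridge_hands            # about 1.57e-12, yet some such hand always occurs

# (2) A "one in a million" coincidence, given many opportunities.
p = 1e-6                             # assumed chance of the oddmatch on a single occasion
n = 5_000_000                        # assumed number of occasions (people x days, say)
p_at_least_once = 1 - (1 - p) ** n   # chance it happens to someone, somewhere

print(f"{bridge_hands:,} possible bridge hands")
print(f"P(specific hand) = {p_hand:.2e}")
print(f"P(one-in-a-million event in {n:,} trials) = {p_at_least_once:.3f}")
```

With these made-up numbers, the "one in a million" event occurs somewhere in the run with probability above 99 percent - exactly the short-run illusion the excerpt describes.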

An excellent example of people not noticing non-events is provided by the failed 
predictions of famed psychics (such as Jeane Dixon). Most people never go back to see 
how the events bore out the predictions. The Skeptical Inquirer, however, has a tradition 
of going back and checking. As each year concludes, it prints a number of predictions 
made by various psychics for that year and evaluates their track records. In the Fall 1980 
issue, the editors took the predictions of 100 "top psychics", tabulated them, listed the top 
twelve in order of frequency, and left it to the reader to assess the accuracy of psychic 
visions of the future. The No. 1 prediction for 1979 (made by 86 psychics) was "Longer 
lives will be had for almost everyone as aging is brought under control." No. 2 (85 
psychics) was "There will be a major breakthrough in cancer, which will almost totally 
wipe out the disease." No. 3 (also 85 psychics) was "There will be an astonishing 
spiritual rebirth and a return to the old values." And so on. No. 6 (81 psychics) was 
"Contact will be made with aliens from space who will give us incredible 
knowledge." The last four, 
interestingly, all involved celebrities: Frank Sinatra was supposed to become seriously ill, 
Edward Kennedy to become a presidential candidate, Burt Reynolds to marry, and 
Princess Grace to return to this country to resume a movie career. Hmm ... 

There is something pathetic, even desperate, about these predictions. One sees 
only too clearly the similarity of the tabloids (which feature these predictions) to the 
equally popular television shows like Fantasy Island and Star Trek. The common 
denominator is escape from reality. This point is well made in an article by William Sims 
Bainbridge in the Fall 1979 issue, on television pseudo-documentaries on the occult and 
pseudo-science. He characterizes those shows as resembling entertainment shows in 
which fact and fantasy are not clearly distinguished. His name for this is "wish-fulfillment". 

Perhaps a key to why so much fantasy is splashed across the tabloids and 
splattered across our living-room screens lies here. Perhaps we all have a desire to dilute 
reality with fantasy, to make reality seem simpler and more aligned with what we wish it 
were. Perhaps for us all, the path of least resistance is to allow reality and fantasy to run 
together like watercolors, blurring our vision but making life more pastel-like: in a word, 
softer. Yet at the same time, perhaps all of us have the potential capacity and even the 
desire to sift sense from nonsense, if only we are introduced to the distinction in a 
sufficiently vivid and compelling manner. 

* * * 

But how can this be done? In the "News and Comment" section of the Spring 
1980 issue, there was an item about a lively anti-pseudo-science traveling comedy lecture 
act by one "Captain Ray of Light"-actually Douglas F. Stalker, an associate professor of 
philosophy at the University of Delaware. The article quotes Stalker on his "comical 
debunking show" (directed at astrology, biorhythms, numerology, UFO's, pyramid 
power, psychic claims, and the like) as follows: 

For years I lectured against them in a serious way, with direct charges at their silly 
theories. These direct attacks didn't change many minds, and so I decided to take an 
indirect approach. If you can't beat them, join them. And so I did, in a manner of 
speaking. I constructed some plainly preposterous pseudosciences of my own and 
showed that they were just like astrology and the others. I also explained how you 
could construct more of these silly theories. By working from the inside out, more 
students came to see how pseudo these pseudosciences are .... And that is the audience 
I try to reach: the upcoming group of citizens. My show reaches them in the right way, 
too. It leaves a lasting impression; it wins friends and changes minds. 

I am delighted to report that Stalker welcomes new bookings. He can be reached at 
the Department of Philosophy, University of Delaware, Newark, Delaware. 

One of the points Stalker makes is that no matter how eloquent a lecture may be, 
it simply does not have the power to convince that experience does. This point has been 
beautifully demonstrated in a study made by Barry Singer and Victor A. Benassi of the 
Psychology Department of California State University at Long Beach. These two 
investigators set out to determine the 
effect on first-year psychology students of seemingly paranormal effects created in the 
classroom by an exotically dressed magician. Their findings were reported in the Winter 
1980/81 issue of the Skeptical Inquirer in a piece titled "Fooling Some of the People All 
of the Time". 

In two of the classes, the performer (Craig Reynolds) was introduced as a 
graduate student "interested in the psychology of paranormal or psychic abilities, [who 
has] been working on developing a presentation of his psychic abilities". The instructor 
also explicitly stated, "I'm not convinced personally of Craig's or anyone else's psychic 
abilities." In two other classes, Craig was introduced as a graduate student "interested in 
the psychology of magic and stage trickery, [who has] been working on developing a 
presentation of his magic act". The authors emphasize that all the stunts Craig 
performed are "easy amateur tricks that have been practiced for centuries 
and are even explained in children's books of magic". After the act, the students were 
asked to report their reactions. Singer and Benassi received two jolts from the reports. 
They write: 

First .... in both the "magic" and the "psychic" classes, about two-thirds of the 
students clearly believed Craig was psychic. Only a few students seemed to believe 
the instructor's description of Craig as a magician, in the two classes where he was 
introduced as such. Secondly, psychic belief was not only prevalent; it was strong and 
loaded with emotion. A number of students covered their papers with exorcism terms 
and exhortations against the Devil. In the psychic condition, 18 percent of the students 
explicitly expressed fright and emotional disturbance. Most expressed awe and amazement. 

We were present at two of Craig's performances and witnessed some extreme 
behavior. By the time Craig was halfway through the "bending" chant [part of a stunt 
where he bent a stainless-steel rod], the class was in a terribly excited state. Students 
sat rigidly in their chairs, eyes glazed and mouths open, chanting together. When the 
rod bent, they gasped and murmured. After class was dismissed, they typically sat still 
in their chairs, staring vacantly or shaking their heads, or rushed excitedly up to Craig, 
asking him how they could develop such powers. We felt we were observing an 
extraordinarily powerful behavioral effect. If Craig had asked the students at the end 
of his act to tear off their clothes, throw him money, and start a new cult, we believe 
some would have responded enthusiastically. Obviously, something was going on here 
that we didn't understand. 

After this dramatic presentation, the classes were told they had only been seeing 
tricks. In fact, two more classes were given the same presentation, with the added 
warning: "In his act, Craig will pretend to read minds and demonstrate psychic abilities, 
but Craig does not really have psychic abilities, and what you'll be seeing are really only 
tricks." Still, despite this strong initial disclaimer, more than half the students in these 
classes believed Craig was psychic after seeing his act. "This says either something about 
the status of university instructors with their students or something about the strange 
pathways people take to occult belief", Singer and Benassi observe philosophically. Now 
comes something astonishing. 

The next question asked was whether magicians could do exactly what Craig 
did. Virtually all the students agreed that magicians could. They were then asked if 
they would like to revise their estimate of Craig's psychic abilities in the light of this 
negative information that they themselves had furnished. Only a few did, reducing the 
percentage of students believing that Craig had psychic powers to 55 percent. 

Next the students were asked to estimate how many people who performed 
stunts such as Craig's and claimed to be psychic were actually fakes using magician's 
tricks. The consensus was that at least three out of four "psychics" were in fact frauds. 
After supplying this negative information, they were again asked if they wished to 
revise their estimate of Craig's psychic abilities. Again, only a few did, reducing the 
percentage believing that Craig had psychic powers to 52 percent. 

Singer and Benassi muse: 

What does all this add up to? The results from our pen-and-pencil test suggest that 
people can stubbornly maintain a belief about someone's psychic powers when they 
know better. It is a logical fallacy to admit that tricksters can perform exactly the same 
stunts as real psychics and to estimate that most so-called psychics are frauds-and at 
the same time to maintain with a fair degree of confidence that any given example 
(Craig) is psychic. Are we humans really that foolish? Yes. 

* * * 

A few years ago, Scot Morris (now a senior editor at Omni magazine in charge of 
its "Games" department) carried out a similar experiment on a first-year psychology class 
at Southern Illinois University, which he wrote up in the Spring 1980 issue of the 
Skeptical Inquirer. First, Morris assessed his students' beliefs in ESP by having them fill 
out a questionnaire. Then a colleague performed an "ESP demonstration", which Morris 
calls "frighteningly impressive". 

After this powerful performance, Morris tried to "deprogram" his students. He 
had two weapons at his disposal. One is what he calls "dehoaxing". This process, just 
three minutes long, consisted in a revelation of how two of the three tricks worked, 
together with a confession that the remaining one of the baffling stunts was also a 
trick. "But," said Morris, "I'm not 
going to say how it was done, because I want you to experience the feeling that, even 
though you can't explain something, that doesn't make it supernatural." The other weapon 
was a 50-minute anti-ESP lecture, in which secrets of professional mind readers were 
revealed, commonsense estimates of probabilities of "oddmatches" were discussed, 
"scientific" studies of ESP were shown to be questionable for various statistical and 
logical reasons, and some other everyday reasons were adduced to cast ESP's reality into 
strong doubt. 

After the performance, only half of the classes were "dehoaxed", but all of them 
heard the anti-ESP lecture. The students were then polled about the strength of their 
belief in various kinds of paranormal phenomena. It turned out that dehoaxed classes had 
a far lower belief in ESP than classes that had simply heard the anti-ESP lecture. The 
dehoaxed classes' average level of ESP belief dropped from nearly 6 (moderate belief) to 
about 2 (strong disbelief), while the non-dehoaxed classes' average level dropped from 6 
to about 4 (slight disbelief). As Morris summarizes this surprising result, "The dehoaxing 
experience was apparently crucial; a three-minute revelation that they had been fooled 
was more powerful than an hour-long denunciation of ESP in producing skepticism 
toward ESP." 

One of Morris' original interests in conducting this experiment was "whether the 
exercise would teach the students skepticism for ESP statements only, or a more general 
attitude of skepticism, as we had hoped. For example, would their experience also make 
them more skeptical of astrology, Ouija boards, and ghosts?" Morris did find a slight 
transfer of skepticism, and from it he concluded hopefully that "teaching someone to be 
skeptical of one belief makes him somewhat more skeptical of similar beliefs, and 
perhaps slightly more skeptical even of dissimilar beliefs." 

This question of transfer of skepticism is, to my mind, the critical one. It is of 
little use to learn a lesson if it always remains a lesson about particulars and has no 
applicability beyond the case in which it was first learned. What, for instance, would you 
say is "the lesson of the People's Temple incident in Jonestown"? Simply that one should 
never follow the Reverend Jim Jones to Guyana? Or more generally, that one should be 
wary of following any guru halfway across the world? Or that one should never follow 
anyone anywhere? Or that all cults are evil? Or that any belief in any kind of savior, 
human or divine, is crazy and dangerous? Or consider the recent convulsions in Iran. Is it 
likely that the fundamentalist "Moral Majority" Christians in America would see their 
own attitudes as parallel to those of fundamentalist Moslems whose fanaticism they 
abhor, and that they would thereby be led to reflect on their own behavior? I wouldn't 
hold my breath. At what level of generality is a lesson learned? What was "the lesson of 
Viet Nam"? Does it apply to any present political situations that the United States is 
facing, or that any country is facing? 

* * * 

World Views in Collision: 


Stalker's Captain Ray of Light expresses faith that by debunking his own 
"miniature" pseudo-sciences before audiences, he can transfer to people a more general 
critical ability-an ability to think more clearly about paranormal claims. But how true is 
this? There are untold believers in some types of paranormal phenomena who will totally 
ridicule other types. It is quite common to encounter someone who will scoff at the 
headlines in the National Enquirer while at the same time believing, say, that through 
Transcendental Meditation you can learn to levitate, or that astrological predictions come 
true, or that UFO's are visitors from other galaxies, or that ESP exists. I've heard many 
people express the following sort of opinion: "Most psychics, unfortunately, are frauds, 
which makes it all the more difficult for the genuine ones to be recognized." You even 
get believers in tricksters such as Uri Geller who say, "I admit he cheats some of the time, 
maybe even 90 percent of the time-but believe me, he has genuine psychic abilities!" 

If you are hunting for a signal in a lot of noise, and the more you look, the more 
noise you find, when is it reasonable to give up and conclude there is no signal there at 
all? On the other hand, sometimes there just might be a signal! The problem is, you don't 
want to jump too quickly to a negative generalization, especially if your feelings are 
based merely on some kind of guilt by association. After all, not everything published in 
the National Enquirer is false. (I had to look awfully hard, though, to locate something in 
its pages that I was sure was true!) The subtle art is in sensing just when to shift-in sensing 
when there is enough evidence. But for better or for worse, this is a subjective matter, an 
art that few journals heretofore have dealt with. 

The Skeptical Inquirer concerns itself with questions ranging from the ridiculous 
to the sublime, from the trivial to the profound. There are those who would say it is a big 
waste of time to worry about such drivel as ESP and other so-called paranormal effects, 
whereas others (such as myself) feel that anyone who is unable or unwilling to think hard 
about what distinguishes the scientific system of thinking from its many rival systems is 
not a devotee of truth at all, and furthermore that the spreading of nonsense is a 
dangerous trend that ought to be checked. 

In any case, the question arises whether the Skeptical Inquirer will ever amount to 
more than a tiny drop in a huge bucket. Surely its editors do not expect that someday it 
will be sold alongside the National Enquirer at supermarket checkout counters! Or, 
carrying this vision to an upside-down extreme, can you imagine a world where a 
debunking journal such as the Skeptical Inquirer (in tabloid form, of course) sold millions 
of copies each week at supermarkets (along with its many rivals), while one lone 
courageous voice of the occult came out four times a year (in a relatively staid format) 
and was sought out by a mere 7,500 readers? Where the many rival debunking tabloids 
were always to be found lying around in laundromats? It sounds like a crazy story fit for 
the pages of the National Enquirer! This ludicrous scenario serves to emphasize just what 
the hardy band at CSICOP is up against. 

What good does it do to publish their journal when only a handful of already- 
convinced anti-occult fanatics read it anyway? The answer is found in, among other 
places, the letters column at the back of each issue. Many people write in to say how vital 
the magazine has been to them, their friends, and their students. High-school teachers are 
among the most frequent writers of thank-you notes to the magazine's editors, but I have 
also seen enthusiastic letters from members of the clergy, radio talk-show hosts, and 
people in many other professions. 

I would hope that by now I have aroused enough interest on the part of readers 
that they might like to subscribe to at least one of the journals that I have discussed in 
these pages. In the spirit of open-mindedness and relativism, therefore, I hereby provide 
addresses for all three (in alphabetical order): 

National Enquirer 
Lantana, Florida 33464 

Skeptical Inquirer 
Box 229, Central Park Station 
Buffalo, New York 14215 

Zetetic Scholar 
Department of Sociology 
Eastern Michigan University 
Ypsilanti, Michigan 48197 

Of course, I would not dream of suggesting which one to subscribe to. Perhaps the most 
prudent course would be not to make any prejudgments, and to subscribe to all three. 

* * * 

Certainly one will never be able to empty the vast ocean of irrationality that all of 
us are drowning in, but the ambition of the Skeptical Inquirer has never been that heroic; 
it has been, rather, to be a steady buoy to which one could cling in that tumultuous sea. It 
has been to promote a healthy brand of skepticism in as many people as it can. As 
Kendrick Frazier said in one of his eloquent editorials, 

Skepticism is not, despite much popular misconception, a point of view. It is, 
instead, an essential component of intellectual inquiry, a method of determining the 
facts whatever they may be and wherever they might lead. It is a part of what we call 
common sense. It is a part of the way science works. 

All who are interested in the search for knowledge and the advancement of 
understanding, imperfect as those enterprises may be, should, it seems to me, support 
critical inquiry, whatever the subject and whatever the outcome. 

It is too bad that we should have to constantly defend truth against so many 
onslaughts from people unwilling to think, but, on the other hand, sloppy thought seems 
inevitable. It's just part of human nature. Come to think of it, didn't I read somewhere 
recently about how your average typical-type John or Jane Doe in the street uses only ten 
percent of his or her brains? Something like that! How come folks don't think harder and 
get more of those little brain cells going? Beats me! Talk about sloppy-it's downright 
boggling!! Even the scientists are stumped!!! 

Post Scriptum 

In the April 1982 issue of Spektrum der Wissenschaft (the German edition of 
Scientific American), the translation of this column appeared. On the flip side of the 
page with the headline "Boy can see with his ears" (Junge kann mit den Ohren sehen) I 
found a short article whose headline ran "Learning to hear with your eyes" (Mit den 
Augen hören lernen). It's logical, I guess-hearing with your eyes does seem to be the flip 
side of seeing with your ears! The article actually was about a machine for helping deaf 
people improve their speech with the aid of computer displays of their voices. 

It was remarkable to see how similar these flipped headlines were, and yet how 
totally different the articles were. The main difference was actually in tone. The National 
Enquirer article spoke of an event that supposedly had occurred and characterized it as 
baffling and beyond explanation; the Spektrum der Wissenschaft article mentioned a 
counterintuitive idea and explained how it might conceivably be realized, after a fashion. 
Note that Spektrum der Wissenschaft managed to grab my attention by exploiting the 
same device as the tabloids do: catch readers by blaring something paradoxical. To 
someone not firmly grounded in science, "hearing with your eyes" and "seeing with your 
ears" sound (and look!) about equally implausible. Indeed, even to someone who is 
scientifically educated, the two phrases sound about equally weird. More information is 
needed to flesh out the meanings. That information was provided in Spektrum der 
Wissenschaft, and turned the initially grabbing headline into a sensible notion. Such is 
usually not the case for articles in the tabloids. But for most readers, such a subtle 
distinction doesn't matter. 

This all goes to emphasize the claim at the beginning of this chapter about the 
trickiness of trying to pin down what truth is, and how deeply circular all belief systems 
are, no matter how much they try to be objective. In the end, rate of survival is the only 
difference between belief systems. This is a worrisome statement. It certainly worries me, 
at least. Still, I believe it. But scientists, I find, are not usually willing to see science itself 
as being rooted in an impenetrably murky swamp of beliefs and attitudes and perceptions. 
Most of them have never considered how it is that human perception and 
categorization underlie all that we take for granted in terms of common sense, and in 
more primordial ways that are so deeply embedded that we even find them hard to talk 
about. Such things as: how we break the world into parts, how we form mental 
categories, how we refine them certain times while blurring them other times, how 
experiences and categories are clustered associatively, how analogies guide our 
intuitions, how imagery works, how valid logic is and where it comes from, how we tend 
to favor simple statements over complex ones, and so on-all these are, for most scientists, 
nearly un-grapplable-with issues, and so they pay them no heed and continue with their work. 

The idea of "simplicity" is a real can of worms, for what is simple in one 
vocabulary can be enormously complex in another vocabulary-and vice versa. Does the 
sun rise in the mornings? Ninety-nine to one you use that geocentric phrase in your 
ordinary conversations, and geocentric imagery in your private thoughts. Yet we all 
"know" that the truth is different: the earth is really rotating on its axis and so the sun's 
motion is only apparent. Well, it may be news to you that general relativity says that all 
coordinate systems are equally valid-and that includes one from whose point of view all 
motion takes place with respect to a fixed, nonrotating earth. Thus Einstein tells us that 
Copernicus and Galileo were, after all, not any righter than Ptolemy and the Pope (score 
ten points for infallibility!). There is even, for each of us, a physically valid "egocentric" 
system of coordinates in which I am still and everything moves relative to me! I point this 
out to show that the truth is much shiftier and subtler than any simple picture can ever 
say. Scientists who oversimplify science distort reality as much as religious fanatics or 
pseudo-scientists do. The troubling truth is that there is no simple boundary line between 
nonsense and sense. (See Chapter 11.) It is a lot hazier and blurrier and messier than even 
thoughtful people generally wish to admit. 

When I was a columnist in Scientific American, I got quite a lot of mail, including 
a sizable number of letters from what I might charitably term "fringe thinkers", or 
uncharitably term "crackpots". I built up large files of such letters in the hopes of 
someday writing an article about "crackpotism" and its detection. The hypothetical book 
How to Tell the True from the False by Its Style of Publication, which I jokingly referred 
to in the article as something that I might write, was therefore not entirely a joke. 

How can you discern which books you do want to read from those you don't? 
Answer: You have various levels of depth of evaluation, ranging from extremely brief 
and superficial tests to very deep and probing ones (i.e., where you actually do take the 
trouble to read the book to see what it says). In order to reach the final stage (reading the 
book), you go through several very critical intermediate levels of analysis and scrutiny. I 
call this mechanism for filtering the "terraced scan". 

How do I decide which letters to read carefully, if I don't read them all carefully 
(to decide whether or not to read them carefully ...)? Answer: I apply the crudest, most 
"syntactic" stages of my terraced scanner and prune 
out the worst ones very quickly. Then I apply a slightly more refined stage of testing to 
the survivors, and prune out some more. And on it goes, until I am left with just a handful 
of truly provocative, significant letters. But if I had no such terraced-scan mechanism, I 
would be trapped in perpetual indecision, having no basis to decide to do anything, since 
I would need to evaluate every pathway in depth in order to decide whether or not to 
follow it. Should I take the bus to Kalamazoo today? Study out of a Smullyan book? 
Practice the piano? Read the latest New York Review of Books? Write an angry letter to 
someone in government? 
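The filtering mechanism described above can be sketched in a few lines of code. This is a sketch of my own, not anything from the text: the stages, their cutoffs, and the sample letters are all invented for illustration, with cheap "syntactic" tests applied first and more expensive scrutiny reserved for the survivors.

```python
# A minimal sketch of a "terraced scan": a sequence of filters ordered from
# cheap and superficial to deep and probing, each applied only to whatever
# survived the previous stage. All stages and data here are hypothetical.

def terraced_scan(items, stages):
    """Apply each (name, test) stage in turn, keeping only items that pass."""
    survivors = list(items)
    for name, test in stages:
        survivors = [item for item in survivors if test(item)]
        print(f"after {name}: {len(survivors)} survivor(s)")
    return survivors

# Hypothetical letters, judged first on crude surface cues, then more deeply.
letters = [
    {"text": "I HAVE SQUARED THE CIRCLE!!!", "pages": 40, "cites_sources": False},
    {"text": "A question about your column on Godel.", "pages": 2, "cites_sources": True},
    {"text": "Einstein was wrong, see enclosed proof", "pages": 80, "cites_sources": False},
]

stages = [
    ("crude scan", lambda l: not l["text"].isupper()),  # drop all-caps ranting
    ("length check", lambda l: l["pages"] <= 10),       # drop 80-page "proofs"
    ("closer look", lambda l: l["cites_sources"]),      # must engage the literature
]

survivors = terraced_scan(letters, stages)
```

The point of the ordering is economy: the expensive final stages only ever see the handful of items that the crude early stages could not rule out.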

This question of the interaction of form and content fascinates me deeply. I do 
indeed believe that if one has the right "terraced scan" mechanisms, one can go very far 
indeed in separating the wheat from the chaff. Of course, one has to believe that there is 
such a distinction: that The Truth actually exists. And just what this Truth is is very hard 
to say. 

* * * 

To me, part of the challenge of Zen is very much akin to the challenge of the 
occult and of pseudo-science: the baffling inner consistency of a worldview totally 
antithetical to my own. What is also interesting is that each human being has a totally 
unique worldview, with its private contradictions and even small insanities. It is my 
belief, for instance, that inside every last one of us there is at least a small pocket of 
insanity: a kind of Achilles' heel that we try to avoid exposing to the world-and to 
ourselves. In his own personal way, Einstein was loony; in my own personal way, I am 
loony; and the same for you, dear lunatic! 

In a way, therefore, to try to pursue the nature of ultimate truth is to enter a 
bottomless pit, filled with circular vipers of self-reference. One could liken CSICOP's job 
to that of the American Civil Liberties Union, which gets itself in all sorts of tangled 
loops because of its stance of defending radical belief systems. For instance, in an odd 
twist, its director, a former concentration camp inmate, found himself defending the 
rights of neo-Nazis to march down the streets of highly Jewish Skokie, Illinois, parading 
their banners advocating the extermination of all "inferior races". And what was worse 
for him was that as a consequence of his actions, the ACLU lost a significant portion of 
its membership. Patrick Henry spoke of "defending to the death your right to say it"-but 
does "it" include anything? Recipes for how to murder people? How to build atomic 
bombs? How to destroy the free press? Governments also face this sticky kind of issue. 
Can a government dedicated to liberty afford to let an organization dedicated to that 
government's downfall flourish? 

It always seems refreshing to see how magazines, in their letters columns, 
willingly publish letters highly critical of them. I say "seems", because often those letters 
are printed in pairs, both raking the magazine over the coals but from opposite directions. 
For example, a right-wing critic and a 
left-wing critic both chastise the magazine for leaning too far the wrong way. The upshot 
is of course that the magazine doesn't even have to say a thing in its own defense, for it is 
a kind of cliché that if you manage to offend both parties in a disagreement, you certainly 
must be essentially right! That is, the truth is supposedly always in the middle-a 
dangerous fallacy. 

Raymond Smullyan, in his book This Book Needs No Title, provides a perfect 
example of the kind of thing I am talking about. It is a story about two boys fighting over 
a piece of cake. Billy says he wants it all, Sammy says they should divide it equally. An 
adult comes along and asks what's wrong. The boys explain, and the adult says, "You 
should compromise-Billy gets three quarters, Sammy one quarter." This kind of story 
sounds ridiculous, yet it is repeated over and over in the world, with loudmouths and 
bullies pushing around meeker and fairer and kinder people. The "middle position" is 
calculated by averaging all claims together, outrageous ones as well as sensible ones, and 
the louder any claim, the more it will count. Politically savvy people learn this early and 
make it their credo; idealists learn it late and refuse to accept it. The idealists are like 
Sammy, and they always get the short end of the stick. 

Magazines often gain rather than lose by printing what amounts to severe 
criticism. This holds even if the critical letter is not matched by an equally critical letter 
from the other side, because if a magazine prints letters critical of it, it appears open- 
minded and willing to listen to criticism. Thus the opposition is co-opted and undercut. 

Another problem is that by shouting loud enough, advocates of any viewpoint can 
gain public attention. Sometimes the loudness comes from the large number of adherents 
of a particular point of view, sometimes it comes from the eloquence or charisma of a 
single individual, and sometimes it comes from the high status of one individual. A 
particularly salient example of this sort of thing is provided by the behavior of the Nixon 
"team" during the Watergate affair. There, they had the ability to manipulate the press 
and the public simply because they were in power. What no private individual would ever 
have been able to get away with for a second was done with the greatest of ease by the 
Nixon people. They shamelessly changed the rules as they wished, and for a long time 
got away with it. 

What does all this have to do with the Skeptical Inquirer? Plenty. Amidst the 
tumult and the shouting, where does the truth lie? What voices should one listen to? How 
can one tell which are credible and which are not? It might seem that the serious matters 
of life have precious little to do with the validity of horoscopes, the probability of 
reincarnation, or the existence of Bigfoot, but I maintain that susceptibility to bad 
arguments in one domain opens the door to being manipulated in another domain. A 
critical mind is critical on all fronts simultaneously, and it is vital to train people to be 
critical at an early stage. 

* * * 

The most serious piece of mail I received as a result of this column was from 
Marcello Truzzi, founder of the Zetetic Scholar. Truzzi wrote me as follows (somewhat 
abridged): 

I was greatly disturbed and disappointed to read your column because of its 
serious distortions about the character of the 'schism' in CSICOP and the position and 
history of the Zetetic Scholar. Your article conveys the clear impression that Zetetic 
Scholar is somehow more sympathetic to pseudo-science, is more 'relativist' and 
'unjudgmental'. That is completely untrue .... 

I think you completely missed the issue between CSICOP and CSAR [Truzzi's 
Center for Scientific Anomalies Research-the organization behind Zetetic Scholar]. The 
term 'skeptic' has become unfortunately equated with disbelief rather than its proper 
meaning of nonbelief. That is, skepticism means the raising of doubts and the urging of 
inquiry. Zetetic Scholar very much stands for doubt and inquiry .... I view much of 
CSICOP activity as obstructing inquiry because it has prejudged many areas of inquiry 
by labeling them pseudo-scientific prior to serious inquiry. In other words, it is not 
judgment that I wish to avoid-quite the contrary-but prejudgment. 

The major problem is that CSICOP, in its fervor to debunk, has tended to lump 
the nonsense of the National Enquirer with the serious scientific research programs of 
what I call 'protosciences' (that is, serious but maverick scientists trying to play by the 
rules of science and get their claims properly tested and examined). By scoffing at all 
claims of the paranormal, CSICOP inhibits (through mockery) serious work on such claims. 

Zetetic Scholar tries to bring together protoscientific proponents and responsible 
critics into rational dialogue .... The purpose is to advance science. 

My position is not a relativist one. I believe science does progress and is 
cumulative. But I do believe that skepticism must extend to all claims, including 
orthodox ones. Thus, before I condemn fortune tellers as doing social evil, I think the 
effects of their use need to be compared to the orthodox practitioners-psychiatrists and 
clinical psychologists. The simple fact is that much nonsense goes on within science that 
is at least as pseudo-scientific as anything going on in what we usually term pseudo- 
sciences .... 

I do not believe in most paranormal claims, but I refuse to close the door on 
discussion of them. The simple fact is that I think I have more confidence in science than, 
say, Martin Gardner does. For example, Martin resigned as a consulting editor for Zetetic 
Scholar when he was told that I planned to publish a 'stimulus' article asking for a 
reconsideration of the views of Velikovsky. [Immanuel Velikovsky is best known for his 
fantastic, fiery visions of the evolution of the solar system and, among other things, a 
theory claiming that the earth, up until quite recently (in astronomical terms), was 
spinning in the other direction! He claimed that his views reconciled science and the 
Bible, and he published many books, perhaps the most famous of which is called Worlds 
in Collision. ] Martin was invited to comment, as were many critics of Velikovsky. But 
Martin felt that even considering Velikovsky seriously in Zetetic Scholar gave 
Velikovsky undeserved legitimacy, so Martin resigned. I happen to think Velikovsky is 
dead wrong, but I also think that he has not been given due process by his critics. I have 
confidence that honest discourse will reveal the errors and virtues (if any) in any esoteric 
scientific claim. I see nothing to be 

World Views in Collision: 


afraid of. I have full confidence in science as a self-correcting system. Some on CSICOP, 
like Martin, do not. 

This is only a small portion of Truzzi's letter, but it gets the idea across. All in all, 
Truzzi emphasized that his magazine serves a different purpose from the Skeptical 
Inquirer, and that I had not made it sufficiently clear what that purpose really is. I hope 
that readers can now understand what it is. My reply to Truzzi follows (also somewhat 
abridged): 

I have thought quite a bit about the issues you raise, and about the difference in 
tone, outlook, purpose, vision, etc., between Zetetic Scholar and the Skeptical Inquirer. I 
find myself more sympathetic than you are to the cause of out-and-out debunking. I am 
impatient with, and in fact rather hostile towards, the immense amount of nonsense that 
gets given a lot of undue credit because of human irrationality. It is like not dealing with 
someone very unpleasant in a group of people because you've been trained to be very 
tolerant and polite. But eventually there comes a point where somebody gets up and lets 
the unpleasant person 'have it'-verbally or physically or however-maybe just escorts them 
out-and everyone then is relieved to be rid of the nuisance, even though they themselves 
didn't have the courage to do it. 

Admittedly, it's just an analogy, but to me, Velikovsky is just such an obnoxious 
person. And there are loads more. I simply don't feel they should be accorded so much 
respect. One shouldn't bend over backwards to be polite to genuinely offensive parties. I 
happen to feel that much of parapsychology has been afforded too much credibility. I feel 
that ESP and so on are incompatible with science for very fundamental reasons. In other 
words, I feel that they are so unlikely to be the case that people who spend their time 
investigating them really do not understand science well. And so I am impatient with 
them. Instead of welcoming them into scientific organizations, I would like to see them 
kicked out. 

Now this doesn't mean that I feel that debating about the reasons I find ESP (etc.) 
incompatible with science at a very deep level is worthless. Quite to the contrary: coming 
to understand how to sift the true from the false is exceedingly subtle and important. But 
that doesn't mean that all pretenders to truth should be accorded respect. 

It's a terribly complex issue. None of us sees the full truth on it. I am sorry if I did 
you a disservice by describing your magazine as I did. I have nothing against your 
magazine in principle, except that I find its open-mindedness so open that it gets boring, 
long-winded, and wishy-washy. Sometimes it reminds me of the senators and 
representatives who, during Watergate, seemed endlessly dense, and either unable or 
unwilling to get the simple point: that Nixon was guilty, on many counts. And that was it. 
It was very simple. And yet Nixon and company did manage to obscure the obvious for 
many months, thanks to fuzzy-minded people who somehow couldn't 'snap' into 
something that was very black-and-white. They insisted on seeing it in endless shades of 
gray. And in a way I think that's what you're up to, in your magazine, a lot of the time: 
seeing endless shades of gray where it's black and white. 

There is a legitimate, indeed, very deep question, as to when that moment of 
'obviousness', that moment of 'snapping' or 'clicking', comes about. Certainly 
I'd be the first to say that that's as deep a question as one can ask. But that's a question 
about the nature of truth, evidence, perception, categories, and so forth and so on. It's not 
a question about parapsychology or Velikovsky et a!. If yours were a magazine about the 
nature of objectivity, I'd have no quarrel with it. I'd love to see such a magazine. But it's 
really largely a magazine that helps to lend credibility to a lot of pseudo-scientists. Not to 
say that everyone who writes for it is a pseudo-scientist! Not at all! But my view is that 
there is such a thing as being too open-minded. I am not open-minded about the earth 
being flat, about whether Hitler is alive today, about claims by people to have squared the 
circle, or to have proven special relativity wrong. I am also not open-minded with respect 
to the paranormal. And I think it is wrong to be open-minded with respect to these things, 
just as I think it is wrong to be open-minded about whether or not the Nazis killed six 
million Jews in World War II. 

I am open-minded, to some extent, about questions of ape language, dolphin 
language, and so on. I haven't reached any final, firm conclusion there. But I don't see 
that being debated in Zetetic Scholar (or in the Skeptical Inquirer). 

My viewpoint is that the Skeptical Inquirer is doing a service to the masses of the 
country, albeit indirectly, by publishing articles that have flair and dash and whose 
purpose is to combat the huge waves of nonsense that we are forced to swim in all the 
time. Of course most people will never read the Skeptical Inquirer themselves, but many 
teachers will, and will be much better equipped thereby to refute kids who come up and 
tell them about precognitive dreams and bent keys or magically fixed watches or you 
name it. 

I feel that the Skeptical Inquirer is playing the role of the chief prosecutor, in 
some sense, of the paranormal, and Zetetic Scholar is a member of the jury who refuses, 
absolutely refuses, to make a decision until more evidence is in. And after more, more, 
more, more, more, more, more, more evidence is in and this character still refuses to go 
one way or another, then one gets impatient. 

Professor Truzzi was very kind to me in his reply, and subsequently even invited 
me to serve on the board of CSAR. I had to decline because of time constraints, but I 
appreciate his-I hate to say this-open-mindedness. Part of his reply is worth repeating: 

You seem to have the idea that I am reluctant to make a decision about many 
extraordinary claims. That really is not the case. I want to make decisions and am 
emotionally inclined to the same impatience as you have. Most of my pro-paranormal 
friends see me as a die-hard skeptic. But hard-line debunkers like Martin Gardner see 
me as wishy-washy or naive. So I get it from both sides, I assure you. 

* * * 

I have quite a bit of sympathy for what Professor Truzzi is attempting to do, in a 
way. What bothers me is that all the vexing problems that he is attempting to be neutral 
on have their counterparts one level up, on the "meta-level", so to speak. That is, for 
every debate in science itself, there is an isomorphic debate in the methodology of 
science, and one could go on up the ladder of "meta"s, running and yet never advancing, 
like a 
hamster on a treadmill. Nixon exploited this principle very astutely in the Watergate 
days, smoking up the air with so many technical procedural and meta-procedural (etc.) 
questions that the main issues were completely forgotten about for a long time while 
people tried to sort out the mess that his smokescreen had created. This kind of technique 
need not be conscious on the part of politicians or scientists-it can emerge as an 
unconscious consequence of simple emotional commitment to an idea or hope. 

It seems to me that object level and meta-level are hopelessly tangled here, just as 
in the Godelian knot, and the only solution is to cut the knot cleanly and get rid of it. 
Otherwise you can wallow forever in the mess. Can cardboard pyramids really sharpen 
razor blades placed underneath them? How many weeks must one wait before one gives 
up? And what if, after you've given up, a friend claims it really works if you put a fried 
egg at each corner of the pyramid? Will you then go back and try that as earnestly as you 
tried the original idea? Will you ever simply reject a claim out of hand? 

Where does one draw the line? Where is the borderline between open-mindedness 
and stupidity? Or between closed-mindedness and stupidity? Where is the optimum 
balance? That is such a deep question that I could not hope to answer it. Professor 
Truzzi's position and my own lie at different points along a spectrum. We have both 
arrived at our positions not by pristine logic, but as a result of many complex interacting 
intuitions about the world and about minds and knowledge. There is certainly no way to 
prove that my position is righter than his, or vice versa. But even if we have no adequate 
theory to formalize such decisions, we nonetheless are all walking instantiations of such 
decision-making beings, and we make decisions for which we could not formally account 
in a million years. Such decisions include all decisions of taste, whether in food, music, 
art, or science. We have to live with the fact that we do not yet know how we make such 
decisions, but that does not mean we have to wallow in indecisiveness in the meantime. 
And anything that helps to make our quick decisions more informed while not impairing 
their quickness is of tremendous importance. I view the Skeptical Inquirer as serving that 
purpose, and I heartily recommend it to my readers. 


On Number Numbness 

May, 1982 

The renowned cosmogonist Professor Bignumska, lecturing on the future of the 
universe, had just stated that in about a billion years, according to her calculations, the 
earth would fall into the sun in a fiery death. In the back of the auditorium a tremulous 
voice piped up: "Excuse me, Professor, but h-h-how long did you say it would be?" 
Professor Bignumska calmly replied, "About a billion years." A sigh of relief was heard. 
"Whew! For a minute there, I thought you'd said a million years." 

John F. Kennedy enjoyed relating the following anecdote about a famous French 
soldier, Marshal Lyautey. One day the marshal asked his gardener to plant a row of trees 
of a certain rare variety in his garden the next morning. The gardener said he would 
gladly do so, but he cautioned the marshal that trees of this size take a century to grow to 
full size. "In that case," replied Lyautey, "plant them this afternoon." 

In both of these stories, a time in the distant future is related to a time closer at 
hand in a startling manner. In the second story, we think to ourselves: Over a century, 
what possible difference could a day make? And yet we are charmed by the marshal's 
sense of urgency. Every day counts, he seems to be saying, and particularly so when there 
are thousands and thousands of them. I have always loved this story, but the other one, 
when I first heard it a few thousand days ago, struck me as uproarious. The idea that one 
could take such large numbers so personally, that one could sense doomsday so much 
more clearly if it were a mere million years away rather than a far-off billion years- 
hilarious! Who could possibly have such a gut-level reaction to the difference between 
two huge numbers? 

Recently, though, there have been some even funnier big-number "jokes" in 
newspaper headlines-jokes such as "Defense spending over the next four years will be $1 
trillion" or "Defense Department overrun over the next four years estimated at $750 
billion". The only thing that worries me about these jokes is that their humor probably 
goes unnoticed by the average citizen. It would be a pity to allow such mirth-provoking 
notions to be appreciated only by a select few, so I decided it would be a good idea to 
devote some space to the requisite background knowledge, which also 
happens to be one of my favorite topics: the lore of very large (and very small) numbers. 

I have always suspected that relatively few people really know the difference 
between a million and a billion. To be sure, people generally know it well enough to 
sense the humor in the joke about when the earth will fall into the sun, but what the 
difference is precisely-well, that is something else. I once heard a radio news announcer 
say, "The drought has cost California agriculture somewhere between nine hundred 
thousand and a billion dollars." Come again? This kind of thing worries me. In a society 
where big numbers are commonplace, we cannot afford to have such appalling number 
ignorance as we do. Or do we actually suffer from number numbness? Are we growing 
ever number to ever-growing numbers? 

What do people think when they read ominous headlines like the ones above? 
What do they think when they read about nuclear weapons with 20-kiloton yields? Or 60- 
megaton yields? Does the number really register-or is it just another cause for a yawn? 
"Ho hum, I always knew the Russians could kill us all 20 times over. So now it's 200 
times, eh? Well, we can be thankful it's not 2,000, can't we?" 

What do people think about the fact that in some heavily populated areas of the 
U.S., it is typical for the price of a house to be a quarter of a million dollars? What do 
people think when they hear radio commercials for savings institutions telling them that 
if they invest now, they could have a million dollars on retirement? Can everyone be a 
millionaire? Do we now expect houses to take a fourth of a millionaire's fortune? What 
ever has become of the once-glittery connotations of the word "millionaire"? 

* * * 

I once taught a small beginning physics class on the thirteenth floor of Hunter 
College in New York City. From the window we had a magnificent view of the 
skyscrapers of midtown Manhattan. In one of the opening sessions, I wanted to teach my 
students about estimates and significant figures, so I asked them to estimate the height of 
the Empire State Building. In a class of ten students, not one came within a factor of two 
of the correct answer (1,472 feet with the television antenna, 1,250 without). Most of the 
estimates were between 300 and 500 feet. One person thought 50 feet was right-a truly 
amazing underestimate; another thought it was a mile. It turned out that this person had 
actually calculated the answer, guessing 50 feet per story and 100 stories or so, thus 
getting about 5,000 feet. Where one person thought each story was 50 feet high, another 
thought the whole 102-story building was that high. This startling episode had a deep 
effect on me. 

It is fashionable for people to decry the appalling illiteracy of this generation, 
particularly its supposed inability to write grammatical English. But, what of the 
appalling innumeracy of most people, old and young, when 
it comes to making sense of the numbers that, in point of fact, and whether they like it or 
not, run their lives? As Senator Everett Dirksen once said, "A billion here, a billion there- 
soon you're talking real money." 

The world is gigantic, no question about it. There are a lot of people, a lot of 
needs, and it all adds up to a certain degree of incomprehensibility. But that is no excuse 
for not being able to understand-or even relate to-numbers whose purpose is to 
summarize in a few symbols some salient aspects of those huge realities. Most likely the 
readers of this article are not the ones I am worried about. It is nonetheless certain that 
every reader of this article knows many people who are ill at ease with large numbers of 
the sort that appear in our government's budget, in the gross national product, corporation 
budgets, and so on. To people whose minds go blank when they hear something ending in 
"illion", all big numbers are the same, so that exponential explosions make no difference. 
Such an inability to relate to large numbers is clearly bad for society. It leads people to 
ignore big issues on the grounds that they are incomprehensible. The way I see it, 
therefore, anything that can be done to correct the rampant innumeracy of our society is 
well worth doing. As I said above, I do not expect this article to reveal profound new 
insights to its readers (although I hope it will intrigue them); rather, I hope it will give 
them the materials and the impetus to convey a vivid sense of numbers to their friends 
and students. 

* * * 

As an aid to numerical horse sense, I thought I would indulge in a small orgy of 
questions and answers. Ready? Let's go! How many letters are there in a bookstore? 
Don't calculate-just guess. Did you say about a billion? That has nine zeros 
(1,000,000,000). If you did, that is a pretty sensible estimate. If you didn't, were you too 
high or too low? In retrospect, does your estimate seem far-fetched? What intuitive cues 
suggest that a billion is appropriate, rather than, say, a million or a trillion? Well, let's 
calculate it. Say there are 10,000 books in a typical bookstore. (Where did I get this? I 
just estimated it off the top of my head, but on calculation, it seems reasonable to me, 
perhaps a bit on the low side.) Now each book has a couple of hundred pages filled with 
text. How many words per page-a hundred? A thousand? Somewhere in between, 
undoubtedly. Let's just say 500. And how many letters per word? Oh, about five, on the 
average. So we have 10,000 X 200 X 500 X 5, which comes to five billion. Oh, well-who 
cares about a factor of five when you're up this high? I'd say that if you were within a 
factor of ten of this (say, between 500 million and 50 billion), you were doing pretty well. 
Now, could we have sensed this in advance-by which I mean, without calculation? 
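For readers who enjoy seeing such arithmetic spelled out, here is the bookstore estimate as a few lines of Python; every number in it is one of the rough guesses from the text, not a measurement:

```python
# Back-of-the-envelope estimate of the number of letters in a bookstore.
books_per_store = 10_000    # "say there are 10,000 books" (a guess)
pages_per_book = 200        # "a couple of hundred pages"
words_per_page = 500        # somewhere between 100 and 1,000
letters_per_word = 5        # "about five, on the average"

letters = books_per_store * pages_per_book * words_per_page * letters_per_word
print(f"{letters:,}")         # 5,000,000,000 -- five billion
print(len(str(letters)) - 1)  # 9 zeros follow the leading 5
```

Anything between 500 million and 50 billion, a factor of ten either way, would still count as a good guess.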

We were faced with a choice. Which of the following twelve possibilities is the 
most likely: 

(a) 10; 

(b) 100; 

(c) 1,000; 

(d) 10,000; 

(e) 100,000; 

(f) 1,000,000; 

(g) 10,000,000; 

(h) 100,000,000; 

(i) 1,000,000,000; 
(j) 10,000,000,000; 
(k) 100,000,000,000; 
(l) 1,000,000,000,000? 

In the United States, this last number, with its twelve zeros, is called a trillion; in most 
other countries it is called a billion. People in those countries reserve "trillion" for the 
truly enormous number 1,000,000,000,000,000,000-to us a "quintillion"-though hardly 
anyone knows that term. 

What most people truly don't appreciate is that making such a guess is very much 
the same as looking at the chairs in a room and guessing quickly if there are two or seven 
or fifteen. It is just that here, what we are guessing at is the number of zeros in a numeral, 
that is, the logarithm (to the base 10) of the number. If we can develop a sense for the 
number of chairs in a room, why not as good a sense for the number of zeros in a 
numeral? That is the basic premise of this article. 

Of course there is a difference between these two types of numerical horse sense. 
It is one thing to look at a numeral such as "10000000000000" and to have an intuitive 
feeling, without counting, that it has somewhere around twelve zeros-certainly more than 
ten and fewer than fifteen. It is quite another thing to look at an aerial photograph of a 
logjam (see Figure 6-1) and to be able to sense, visually or intuitively or somewhere in 
between, that there must be between three and five zeros in the decimal representation of 
the number of logs in the jam-in other words, that 10,000 is the closest power of 10, that 
1,000 would definitely be too low, and that 100,000 would be too high. Such an ability is 
simply a form of number perception one level of abstraction higher than the usual kind of 
number perception. But one level of abstraction should not be too hard to handle. 

The trick, of course, is practice. You have to get used to the idea that ten is a very 
big number of zeros for a numeral to have, that five is pretty big, and that three is almost 
graspable. Probably what is most important is that you should have a prototype example 
for each number of zeros. For instance: Three zeros would take care of the number of 
students in your high school: 1,000, give or take a factor of three. (In numbers having just 
a few zeros we are always willing to forgive a factor of three or so in either direction, as 
long as we are merely estimating and not going for exactness.) Four zeros is the number 
of books in a non-huge bookstore. Five zeros is 

FIGURE 6-1. Aerial view of a logjam in Oregon. How many logs? [Photo by Ray Atkeson.] 

the size of a typical county seat: 100,000 souls or so. Six zeros-that is, a million-is getting 
to be a large city: Minneapolis, San Diego, Brasilia, Marseilles, Dar es Salaam. Seven 
zeros is getting huge: Shanghai, Mexico City, Seoul, Paris, New York. Just how many 
cities do you think there are in the world with a population of a million or more? Of 
them, how many do you think you have never heard of? What if you lowered the 
threshold to 100,000? How many towns are there in the United States with a population 
of 1,000 or less? Here is where practice helps. 

I said that you should have one prototype example for each number of digits. 
Actually, that is silly. You should have a few. In order to have a concrete sense of "nine- 
zero-ness", you need to see it instantiated in several different media, preferably as diverse 
as populations, budgets, small objects (ants, coins, letters, etc.), and maybe a couple of 
miscellaneous places, such as astronomical distances or computer statistics. 

Consider the famous claim made by the McDonald's hamburger chain: "Over 25 
billion served" (or whatever they say these days). Is this figure credible? Well, if it were 
ten times bigger-that is, 250 billion-we could 
divide by the U.S. population more easily. (This is apparent if you happen to know that 
the U.S. population is about 230 million. For the purposes of this discussion, let us call 
the U.S. population 250 million, or 2.5 X 10^8-a common number that everyone should 
know.) Let us imagine, then, that the claim were "Over 250 billion served". Then we 
would compute that 1,000 burgers had been cooked for every person in the U.S. But since 
we deliberately inflated it by a factor of 10, let us now undo that-let us divide our answer 
by ten, to get 100. Is it plausible that McDonald's has prepared 100 burgers for every 
person in the U.S.? Sounds reasonable to me; after all, they have been around for many 
years, and some families go there many times a year. Therefore the claim is plausible, 
and the fact that it is plausible makes it probable that it is quite accurate. Presumably, 
McDonald's wouldn't go to the trouble of updating their signs every so often if they were 
not trying to be accurate. I must say that if their earnest effort helps to reduce 
innumeracy, I approve highly of it. 
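The McDonald's plausibility check is a one-line division once the population figure is at hand; the inflate-by-ten trick above is just a way of doing that division in one's head:

```python
us_population = 250_000_000       # 2.5 X 10^8, the round figure used throughout
burgers_claimed = 25_000_000_000  # "Over 25 billion served"

burgers_per_person = burgers_claimed / us_population
print(burgers_per_person)         # 100.0 -- plausible over many years
```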

Where do all those burgers come from? A staggering figure is the number of 
cattle slaughtered every day in the U.S. It comes to about 90,000. When I first heard this, 
it sounded amazingly high, but think about it. Maybe half a pound of meat per person per 
day. Once again, the U.S. population-250 million-comes in handy. With half a pound of 
meat per person per day, that comes to 100 million pounds of meat per day-or something 
like that, anyway. We're certainly not going to worry about factors of two. How many 
tons is that? Divide by 2,000 to get 50,000 tons. But an individual animal does not yield a 
ton of meat. Maybe 1,000 pounds or so-half a ton. For each ton of meat, that would mean 
two animals were killed. So we would get about 100,000 animals biting the dust every 
day to satisfy our collective appetite. Of course, we do not eat only beef, so the true 
figure should be a bit lower. And that brings us back down to about the right figure. 
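The cattle estimate runs the same way, with the same round figures (all guesses, good to a factor of two or so):

```python
us_population = 250_000_000       # 2.5 X 10^8
meat_lb_per_person_per_day = 0.5  # "maybe half a pound of meat per person per day"
lb_per_animal = 1_000             # roughly half a ton of meat per animal

lb_per_day = us_population * meat_lb_per_person_per_day  # 125 million pounds
animals_per_day = lb_per_day / lb_per_animal
print(int(animals_per_day))       # 125000 -- the same ballpark as 90,000
```

Rounding down because we do not eat only beef brings the figure near the quoted 90,000.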

* * * 

How many trees are cut down each week to produce the Sunday edition of the 
New York Times? Say a couple of million copies are printed, each one weighing four 
pounds. That comes to about eight million pounds of paper-4,000 tons. If a tree yielded a 
ton of paper, that would be 4,000 trees. I don't know much about logging, but we cannot 
be too far off in assuming a ton per tree. At worst it would be 200 pounds of paper per 
tree, and that would mean 40,000 small trees. The logjam photograph shows somewhere 
between 7,500 and 15,000 logs, as nearly as I can estimate. So, if we do assume 200 
pounds of paper per tree, the logs in the photograph represent considerably less than half 
of one Sunday Times' worth of trees! We could go on to estimate the number of trees cut 
down every month to provide for all the magazines, books, and newspapers published in 
this country, but I'll leave that to you. 
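The Sunday Times estimate in the same style; both per-tree yield figures are the text's guesses:

```python
copies_per_sunday = 2_000_000   # "a couple of million copies"
lb_per_copy = 4

trees_if_ton_each = copies_per_sunday * lb_per_copy / 2_000  # a ton of paper per tree
trees_if_small = copies_per_sunday * lb_per_copy / 200       # worst case: 200 lb per tree

print(int(trees_if_ton_each))   # 4000
print(int(trees_if_small))      # 40000
```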

How many cigarettes are smoked in the U.S. every year? (How many 
zeros?) This is a classic "twelver"-on the order of a trillion. It is easy to calculate. Say 
that half of the people in the country are cigarette smokers: 100 million of them. (I know 
this is something of an overestimate; we'll compensate by reducing something else 
somewhere along the way.) Each smoker smokes-what? A pack per day? All right. That 
makes 20 cigarettes times 100 million: two billion cigarettes per day. There are 365 days 
per year, but let's say 250, since I promised to reduce something somewhere; 250 times 
two billion gives about 500 billion-half a trillion. This is just about on the nose, as it turns 
out; the last I looked (a few years ago), it was some 545 billion. I remember how awed I 
was when I first encountered this figure; it was the first time I had met up with a concrete 
number about the size of a trillion. 
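The cigarette calculation, carried out exactly as in the text; note how the deliberate overestimate of smokers is balanced by the undercount of days:

```python
smokers = 100_000_000    # half the U.S. population -- knowingly too high
cigs_per_day = 20        # one pack
days_per_year = 250      # reduced from 365 to compensate for the overestimate

cigs_per_year = smokers * cigs_per_day * days_per_year
print(f"{cigs_per_year:,}")   # 500,000,000,000 -- half a trillion
```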

By the way, "20 (cigarettes) times 100 million" is not a hard calculation, yet I bet 
it would stump many Americans, if they had to do it in their head. My way of doing it is 
to shift a factor of 10 from one number to the other. Here, I reduce 20 to 2, while 
increasing 100 to 1,000. It makes the problem into "2 times 1,000 million", and then I just 
remember that 1,000 million is one billion. I realize that this sounds absolutely trivial to 
anyone who is comfortable with figures, but it sounds truly frightening and abstruse to 
people who are not so comfortable with them-and that means most people. 
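The factor-shifting trick can even be written out mechanically; sliding a ten from one factor to the other leaves the product untouched:

```python
# "20 times 100 million" -> "2 times 1,000 million" -> two billion
a, b = 20, 100_000_000
a, b = a // 10, b * 10        # slide one factor of ten across
print(a, b)                   # 2 1000000000
print(a * b)                  # 2000000000 -- the product is unchanged
```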

It is numbers like 545 billion that we are dealing with when we talk about a 
Defense Department overrun of $750 billion for the next four years. A really fancy 
single-user computer (the kind I wouldn't mind having) costs approximately $75,000. 
With $750 billion to throw around, we could give one to every person in New York City, 
which is to say, we could buy about ten million of them. Or, we could give $1 million to 
every person in San Francisco, and still have enough left over to buy a bicycle for 
everyone in China! There's no telling what good uses we could put $750 billion to. But 
instead, it will go into bullets and tanks and fighters and war games and missile systems 
and jet fuel and marching bands and so on. An interesting way to spend $750 billion, but 
I can think of better ways. 

* * * 

Let us think of some other kinds of big numbers. Did you know that your retina 
has about 100 million cells in it, each of which responds to some particular kind of 
stimulus? And they feed their signals back into your brain, which is now thought to 
consist of somewhere around 100 billion neurons, or nerve cells. The number of glia- 
smaller supporting cells in the brain-is about ten times as large. That means you have 
about one trillion glia in your little noggin. That may sound big, but in your body 
altogether there are estimated to be about 60 or 70 trillion cells. Each one of them 
contains millions of components working together. Take the protein hemoglobin, for 
instance, which transports oxygen in the bloodstream. We each have about six billion 
trillion (that is, six thousand million million million) copies of the 
hemoglobin molecule inside us, with something like 400 trillion of them (400 million 
million) being destroyed every second, and another 400 trillion being made! (By the way, 
I got these figures from Richard Dawkins' book The Selfish Gene. They astounded me 
when I read them there, and so I tried to calculate them on my own. My estimates came 
out pretty close to his figures, and then, for good measure, I asked a friend in biology to 
calculate them, and she seemed to get about the same answers independently, so I guess 
they are pretty reliable.) 

The number of hemoglobin molecules in the body is about 6 X 10^21. It is a 
curious fact that over the past year or two, nearly everyone has become familiar, 
implicitly or explicitly, with a number nearly as big-namely, the number of different 
possible configurations of Rubik's Cube. This number -let us call it Rubik's constant-is 
about 4.3 X 10^19. For a very vivid image of how big this is, imagine that you have many 
cubes, an inch on each side, one in every possible configuration. Now you start spreading 
them out over the surface of the United States. How thickly covered would the U.S. be in 
cubes? Moreover, if you are working in Rubik's "supergroup", where the orientations of 
face centers matter, then Rubik's "superconstant" is 2,048 times bigger, or about 9 X 10^22. 

The Ideal Toy Corporation-American marketer of the Cube-was far less daring 
than McDonald's. On their package, they softened the blow, saying merely "Over three 
billion combinations possible"-a pathetic and euphemistic underestimate if ever I heard 
one. This is the first time I have ever heard Muzak based on a pop number rather than a 
pop melody. Try these out, for comparison's sake: 

(1) "Entering San Francisco-population greater than 1." 

(2) "McDonald's-over 2 served." 

(3) "Together, the superpowers have 3 pounds of TNT for every human being on earth." 

Number 1 is off by a factor of about a million, or six orders of magnitude (factors of ten). 
Number 2 is off by a factor of ten billion or so (ten orders of magnitude), while number 3 
(which I saw in a recent letter to the editor of the Bulletin of the Atomic Scientists) is too 
small by a factor of about a thousand (three orders of magnitude). 

The hemoglobin number and Rubik's superconstant are really big. How about 
some smaller big ones, to come back to earth for a moment? All right-how many people 
would you say are falling to earth by parachute at this moment (a perfectly typical 
moment, presumably)? How many English words do you know? How many murders are 
there in Los Angeles County every year? In Japan? These last two give quite a shock 
when put side by side: Los Angeles County, about 2,000; Japan, about 900. 

Speaking of yearly deaths, here is one we are all used to sweeping under the rug, 
it seems: 50,000 dead per year (in this country alone) in car 
accidents. If you count the entire world, it's probably two or three times that many. Can 
you imagine how we would react if someone said to us today: "Hey, everybody! I've 
come up with a really nifty invention. Unfortunately, it has a minor defect-every twelve 
years or so it will wipe out about as many Americans as the population of San Francisco. 
But wait a minute! Don't go away! The rest of you will love it, I promise!" Now, these 
statistics are accurate for cars. And yet we seldom hear people chanting, "No cars is good 
cars!" How many bumper strips have you seen that say, "No more cars!"? Somehow, 
collectively, we are willing to absorb the loss of 50,000 lives per year without any serious 
worry. And imagine that half of this-25,000 needless deaths-is due to drunks behind the 
wheel. Why aren't you just fuming? 

* * * 

I said I would be a little lighter. All right. Light consists of photons. How many 
photons per second does a 100-watt bulb put out? About 10^20-another biggie. Is it bigger 
or smaller than the number of grains of sand on a beach? What beach? Say a stretch of 
beach a mile long, 100 feet wide and six feet deep. What would you estimate? Now 
calculate it. How about trying the number of drops in the Atlantic Ocean? Then try the 
number of fish in the ocean. Which are there more of: fish in the sea, or ants on the 
surface of the earth? Atoms in a blade of grass, or blades of grass on the earth? Blades of 
grass, or insects? Leaves on a typical oak tree, or hairs on a human head? How many 
raindrops fall on your town in one second during a terrific downpour? 

How many copies of the Mona Lisa have ever been printed? Let's try this one 
together. Probably it is printed in magazines in the United States a few dozen times per 
year. Say each of the magazines prints 100,000 copies. That makes a few million copies 
per year in American magazines, but then there are books and other publications. Maybe 
we should double or triple our figure for the U.S. To take into account other countries, we 
can multiply it again by three or four. Now we have hit about 100 million copies per year. 
Let us assume this held true for each year of this century. That would make nearly ten 
billion copies of the Mona Lisa! Quite a meme, eh? Probably we have made some 
mistakes along the way, but give or take a factor of ten, that is very likely about what the 
number is. 

"Give or take a factor of ten"!? A moment ago I was saying that a factor of three 
was forgivable, but now, here I am forgiving myself two factors of three-that is, an entire 
order of magnitude. Well, the reason is simple: We are now dealing with larger numbers 
(10^10 instead of 10^5), and so it is permissible. This brings up a good rule of thumb. Say 
an error of a factor of three is permissible for each estimated factor of 100,000. That 
means we are allowed to be off by a factor of ten-one order of magnitude- when we get up 
to sizes around ten billion, or by a factor of 100 or so (two orders of magnitude) when we 
get up to the square of that, which is 10^20, about 2.5 
times the size of Rubik's constant. This means it would have been forgivable if Ideal had 
said, "Over a billion billion combinations", since then they would have been off by a 
factor of only 40-about 1.5 orders of magnitude -which is within our limits when we're 
dealing with numbers that large. 
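One way to formalize this rule of thumb: a factor of 3 (half an order of magnitude) of slack per factor of 100,000 (five orders of magnitude) works out to an allowed error of n raised to the power 0.1. The formula is my own extrapolation from the three examples in the text:

```python
import math

def allowed_error_factor(n):
    """Half an order of magnitude of slack per five orders of magnitude."""
    return 10 ** (math.log10(n) / 10)    # the same thing as n ** 0.1

print(round(allowed_error_factor(10**5), 1))   # 3.2 -- about a factor of three
print(round(allowed_error_factor(10**10)))     # 10 -- one order of magnitude
print(round(allowed_error_factor(10**20)))     # 100 -- two orders of magnitude
```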

Why should we be content with an estimate that is only one percent of the actual 
number, or with an estimate that is 100 times too big? Well, if you consider the base-10 
logarithm of the number-the number of zeros-then if we say 18 when the real answer is 
20, we are off by only ten percent! Now what entitles us to cavalierly dismiss the 
magnitude itself and to switch our focus to its logarithm (its order of magnitude)? Well, 
when numbers get this big, we have no choice. Our perceptual reality begins to shift. We 
simply cannot visualize the actual quantity. The numeral-the string of digits-takes over: 
our perceptual reality becomes one of numbers of zeros. When does this shift take place? 
It begins when we can no longer see, in our mind's eye, a collection of the right order of 
magnitude. For me, this "perceptual logjam" begins at about 10^4-the size of the actual 
logjam I remember in the photograph. It is important to understand this transition. It is 
one of the key ideas of this article. 

There are other ways to grasp 10^4, such as the number of soup cans that would fill 
a 50-foot shelf in a supermarket. Numbers much bigger than that, I simply cannot 
visualize. The number of tiles lining the Lincoln Tunnel between Manhattan and New 
Jersey is so enormous that I cannot easily picture it. (It is on the order of a million, as you 
can calculate for yourself, even if you've never seen it!) In any case, somewhere around 
10^4 or 10^5, my ability to visualize begins to fade and to be replaced with that second- 
order reality of the number of digits (or, to some extent, with number names such as 
"million", "billion", and "trillion"). Why it happens at this size and not, say, at 10 million 
or at 1,000 must have to do with evolution and the role that the perception of vast arrays 
plays in survival. It is a fascinating philosophical question, but one I cannot hope to 
answer here. 

In any case, a pretty good rule of thumb is this: Your estimate should be within 
ten percent of the correct answer-but this need apply only at the level of your perceptual 
reality. Therefore you are excused if you guessed that Rubik's cube has 10^18 positions, 
since 18 is pretty close to 19.5, which is about what the number of digits is. (Remember 
that, roughly speaking, Rubik's constant is 4.3 X 10^19, or 43,000,000,000,000,000,000. 
The leading factor of 4.3 counts for a bit more than half a digit, since each factor of 10 
contributes a full digit, whereas a factor of 3.16, the square root of 10, contributes half a 
digit.) 

If, perchance, you were to start dealing with numbers having millions or billions 
of digits, the numerals themselves (the colossal strings of digits) would cease to be 
visualizable, and your perceptual reality would be forced to take another leap upward in 
abstraction-to the number that counts the digits in the number that counts the digits in the 
number that counts the objects concerned. Needless to say, such third-order perceptual 
reality is 
highly abstract. Moreover, it occurs very seldom, even in mathematics. Still, you can 
imagine going far beyond it. Fourth- and fifth-order perceptual realities would quickly 
yield, in our purely abstract imagination, to tenth-, hundredth-, and millionth-order 
perceptual realities. 

By this time, of course, we would have lost track of the exact number of levels we 
had shifted, and we would be content with a mere estimate of that number (accurate to 
within ten percent, of course). "Oh, I'd say about two million levels of perceptual shift 
were involved here, give or take a couple of hundred thousand" would be a typical 
comment for someone dealing with such unimaginably unimaginable quantities. You can 
see where this is leading: to multiple levels of abstraction in talking about multiple levels 
of abstraction. If we were to continue our discussion just one zillisecond longer, we 
would find ourselves smack-dab in the middle of the theory of recursive functions and 
algorithmic complexity, and that would be too abstract. So let's drop the topic right here. 

* * * 

Related to this idea of huge numbers of digits, but more tangible, is the 
computation of the famous constant π. How many digits have so far been calculated by 
machine? The answer (as far as I know) is one million. It was done in France a few years 
ago, and the million digits fill an entire book. Of these million, how many have been 
committed to human memory? The answer strains credulity: 20,000, according to the 
latest Guinness Book of World Records. I myself once learned 380 digits of π, when I 
was a crazy high-school kid. My never-attained ambition was to reach the spot, 762 digits 
out in the decimal expansion, where it goes "999999", so that I could recite it out loud, 
come to those six 9's, and then impishly say, "and so on!" Later, I met several other 
people who had outdone me (although none of them had reached that string of 9's). All 
of us had forgotten most of the digits we once knew, but at least we all remembered the 
first 100 solidly, and so occasionally we would recite them in unison-a rather esoteric pastime. 

What would you think if someone claimed that the entire book of a million digits 
of π had been memorized by someone? I would dismiss the claim out of hand. A student 
of mine once told me very earnestly that Jerry Lucas, the memory and basketball whiz, 
knew the entire Manhattan telephone directory by heart. Here we have a good example of 
how innumeracy can breed gullibility. Can you imagine what memorizing the Manhattan 
telephone directory would involve? To me, it seems about two orders of magnitude 
beyond credibility. To memorize one page seems fabulously difficult. To memorize ten 
pages seems at about the limit of credibility. Incidentally, memorizing the entire Bible 
(which I have occasionally heard claimed) seems to me about equivalent to memorizing 
ten pages of the phone book, because of the high redundancy of written language and the 
regularity of events in the world. But to have memorized 1,500 dense pages 
of telephone numbers, addresses, and names is literally beyond belief. I'll eat my hat-in 
fact, all of my 10,000 hats-if I'm wrong. 

* * * 

There are some phenomena for which there are two (or more) scales with which 
we are equally comfortable, depending on the circumstances. Take pitch in music. If you 
look at a piano keyboard, you will see a linear scale along which pitch can be measured. 
The natural thing to say is: "This A is nine semitones higher than that C, and the C is 
seven semitones higher than that F, so the A is 16 semitones higher than the F." It is an 
additive, or linear, scale. By this I mean that if you assigned successive whole numbers to 
successive notes, then the distance from any note to any other would be given by the 
difference between their numbers. Only addition and subtraction are involved. 

By contrast, if you are going to think of things acoustically rather than auditorily, 
physically rather than perceptually, each pitch is better described in terms of its frequency 
than in terms of its position on a keyboard. The low A at the bottom of the keyboard 
vibrates about 27 times per second, whereas the C three semitones above it vibrates about 
32 times per second. So you might be inclined to guess that in order to jump up three 
semitones one should always add five cycles per second. Not so. You should always 
multiply by about 32/27 instead. If you jump up twelve semitones, that means four 
repeated upward jumps of three semitones. 

Thus, when you have gone up one octave (twelve semitones), your pitch has been 
multiplied by 32/27 four times in a row, which is 2. Actually, the fourth power of 32/27 is 
not quite 2, and since an octave represents a ratio of exactly 2, 32/27 must be a slight 
underestimate. But that is beside the point. The point is that the natural operations for 
comparing frequencies are multiplication and division, whereas the natural operations for 
note numbers on a keyboard are addition and subtraction. What this means is that the note 
numbers are logarithms of the frequencies. Here is a case where we think naturally in
logarithms!

Here is a different way of putting things. Two adjacent notes near the top of a 
piano keyboard differ in frequency by about 400 cycles per second, whereas adjacent 
notes near the bottom differ by only about two cycles per second. Wouldn't that seem to 
imply that the intervals are wildly different? Yet to the human ear, the high and the low 
interval sound exactly the same! 
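The keyboard arithmetic of the last two paragraphs can be checked in a few lines (a sketch only: equal-tempered tuning and a bottom A of 27.5 cycles per second are my assumptions, and the function name `frequency` is mine):

```python
import math

# Equal temperament: each semitone multiplies frequency by 2**(1/12),
# so a note's keyboard position is a logarithm of its frequency.
SEMITONE = 2 ** (1 / 12)

def frequency(steps, low_a_hz=27.5):
    """Frequency of the note `steps` semitones above the bottom A."""
    return low_a_hz * SEMITONE ** steps

three_semitones = SEMITONE ** 3      # ~1.189, close to the 32/27 in the text
octave = three_semitones ** 4        # four such jumps: exactly a factor of 2

# Equal ratios, unequal differences: adjacent notes are a couple of
# cycles apart at the bottom of the keyboard, hundreds apart at the top.
low_gap = frequency(1) - frequency(0)
high_gap = frequency(87) - frequency(86)

# Note numbers are recovered by taking a logarithm of frequency:
note_number = 12 * math.log2(frequency(40) / 27.5)   # gives back 40
```

The ratio between adjacent notes is constant everywhere on the keyboard; only the differences in cycles per second vary wildly, which is exactly the multiplicative-versus-additive point.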

Logarithmic thinking happens when you perceive only a linear increase even if 
the thing itself doubles in size. For instance, have you ever marveled at the fact that 
dialing a mere seven digits can connect any telephone to any other in the New York 
metropolitan area, where some 10 million people live? Suppose New York were to 
double in population. Would you then have to add seven more digits to each phone 
number, making fourteen-digit numbers, in order to reach those twenty million people? 
Of course not. 

Adding seven more digits would multiply the number of possibilities by ten million. In 
fact, adding merely three digits (the area code in front) enables you to reach any phone 
number in North America. This is simply because each new digit creates a tenfold 
increase in the number of phones reachable. Three more digits will always multiply your 
network by a factor of 1,000: three orders of magnitude. Thus the length of a phone 
number-the quantity directly perceived by you when you are annoyed at how long it takes 
to dial a long-distance number-is a logarithmic measure of the size of the network you are 
embedded in. That is why it is preposterous to see huge long numbers of 25 or 30 digits 
used as codes for people or products when, without any doubt, a few digits would suffice. 

I once was sent a bill asking that I transfer a fee to account No.
60802-620-1-1-721000-421-01062 in a bank in Yugoslavia. For a while this held my personal record for
absurdity of numbers encountered in business transactions. Recently, however, I was sent 
my car registration form, at the bottom of which I found this enlightening constant: 
010101361218200301070014263117241512003603600030002. For good measure it was 
followed, a few blank spaces later, by '19283'.

One place where we think logarithmically is number names. We in America have 
a new name every three zeros (up to a certain point): from thousand to million to billion 
to trillion. Each jump is "the same size", in a sense. That is, a billion is exactly as much 
bigger than a million as a million is bigger than a thousand. Or a trillion is to a billion 
exactly as a billion is to a million. On the other hand, does this continue forever? For 
instance, does it seem reasonable to say that 10^103 is to 10^100 exactly as a million is to a
thousand? I would be inclined to say "No, those big numbers are almost the same size, 
whereas a thousand and a million are very different." It is a little tricky because of the 
shifts in perceptual reality. 

In any case, we seem to run out of number names at about a trillion. To be sure, 
there are some official names for bigger numbers, but they are about as familiar as the 
names of extinct dinosaurs: "quadrillion", "octillion", "vigintillion", "brontosillion", 
"triceratillion", and so on. We are simply not familiar with them, since they died off a 
dinosillion years ago. Even "billion" presents cross-cultural problems, as I mentioned 
above. Can you imagine what it would be like if in Britain, "hundred" meant 1,000? The 
fact is that when numbers get too large, people's imaginations balk. It is too bad, though, 
that a trillion is the largest number with a common name. What is going to happen when 
the defense budget gets even more bloated? Will we just get number? Of course, like the 
dinosaurs, we may never be granted the luxury of facing that problem. 

* * * 

The speed of automatic computation is something whose progress is best charted 
logarithmically. Over the past several decades, the number of 

primitive operations (such as addition or multiplication) that a computer can carry out per 
second has multiplied tenfold about every seven years. Nowadays, it is some 100 million 
operations per second or, on the fanciest machines, a little more. Around 1975, it was 
about 10 million operations per second. In the later 1960's, one million operations per 
second was extremely fast. In the early 1960's, it was 100,000 operations per second. 
10,000 was high in the mid-1950's, 1,000 in the late 1940's-and in the early 1940's, 100. 
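The tenfold-every-seven-years trend just listed can be written as one small formula (a rough fit to the figures above; the anchor year of 1942 and the starting figure of 100 operations per second are my own reading of them):

```python
# Tenfold growth every seven years, anchored in the early 1940's:
def ops_per_second(year, base_year=1942, base_ops=100):
    """Rough computing speed in primitive operations per second."""
    return base_ops * 10 ** ((year - base_year) / 7)

mid_seventies = ops_per_second(1975)    # a few million, near the quoted 10 million
early_eighties = ops_per_second(1982)   # tens of millions, near the quoted 100 million
```

The fit is only good to within an order of magnitude, but on a logarithmic chart that is exactly the kind of accuracy that matters.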

In fact, in the early 1940's, Nicholas Fattu was the leader of a team at the 
University of Minnesota that was working for the Army Air Force on some statistical 
calculations involving large matrices (about 60X60). He brought about ten people 
together in a room, each of whom was given a Monroematic desk calculator. These 
people worked full-time for ten months in a coordinated way, carrying out the 
computations and cross-checking each other's results as they went along. About twenty 
years later, out of curiosity, Professor Fattu redid the calculations on an IBM 704 in 
twenty minutes. He found that the original team had made two inconsequential errors. 
Nowadays, of course, the whole thing could be done on a big "mainframe" computer in a 
second or two. 

Still, modern computers can easily be pushed to their limits. The notorious 
computer proof of the four-color theorem, done at the University of Illinois a few years 
ago, took 1,200 hours of computer time. When you convert that into days, it sounds more 
impressive: 50 full 24-hour days. If the computer was carrying out twenty million 
operations per second, that would come to 10^14, or 100 trillion, primitive operations-a
couple of hundred for every cigarette smoked that year in the U.S. Whew! 

A computer doing a billion operations per second would really be moving along. 
Imagine breaking up one second into as many tiny fragments as there are seconds in 30 
years. That is how tiny a nanosecond-a billionth of a second-is. To a computer, a second 
is a lifetime! Of course, the computer is dawdling compared with the events inside the 
atoms that compose it. Take one atom. A typical electron circling a typical nucleus makes 
about 10^15 orbits per second, which is to say, a million orbits per nanosecond. From an
electron's-eye point of view, a computer is as slow as molasses in January. 
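The 30-year comparison is a one-line check:

```python
# Seconds in 30 years: about a billion, so a nanosecond is to one
# second roughly as one second is to 30 years.
seconds_in_30_years = 30 * 365 * 24 * 60 * 60   # 946,080,000
```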

Actually, an electron has two eyes with which to view the situation. It has both an 
orbital cycle time and a rotational cycle time, since it is spinning on its own axis. Now, 
strictly speaking, "spin" is just a metaphor at the quantum level, so you should take the 
following with a big grain of salt. Nevertheless, if you imagine an electron to be a 
classically (non-quantum-mechanically) spinning sphere, you can calculate its rotation 
time from its known spin angular momentum (which is about Planck's constant, or 10^-34
joule-second) and its radius (which we can equate with its Compton wavelength, which is
about 10^-10 centimeter). The spin time turns out to be about 10^-20 second. In other
words, every time the superfast computer adds two numbers, every electron inside it has 
pirouetted on its own axis about 

100 billion times. (If we took the so-called "classical radius" of the electron instead, we
would have the electron spinning at about 10^24 times per second-enough to make one
dizzy! Since this figure violates both relativity and quantum mechanics, however, let us 
be content with the first figure.) 
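For the curious, that order-of-magnitude estimate can be redone numerically (all constants rounded; the uniform-sphere moment of inertia, the spin value of ħ/2, and the Compton-wavelength radius are the classical metaphor the text has just warned about):

```python
import math

hbar = 1.05e-34   # J·s; the text rounds this to "about Planck's constant"
m_e = 9.1e-31     # electron mass, kg
r = 2.4e-12       # Compton wavelength, m (the ~10^-10 cm in the text)

L = hbar / 2                 # spin angular momentum
I = 0.4 * m_e * r ** 2       # moment of inertia of a uniform sphere, (2/5) m r^2
omega = L / I                # angular velocity, rad/s
T = 2 * math.pi / omega      # rotation period: within an order of
                             # magnitude of the 10^-20 s quoted
```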

At the other end of the scale, there is the slow, stately twirling of our galaxy, 
which makes a leisurely complete turn every 200 million years or so. And within the 
solar system, the planet Pluto takes about 250 years to complete an orbit of the sun. 
Speaking of the sun, it is about a million miles across and has a mass on the order of 10^30
kilograms. The earth is a featherweight in comparison, a mere 10^24 kilograms. And we
should not forget that there are some stars-red giants-of such great diameter that they 
would engulf the orbit of Jupiter. Of course, such stars are very tenuous, something like 
cotton candy on a cosmic scale. By contrast, some stars-neutron stars-are so tightly 
packed that if you could remove from any of them a cube a millimeter on an edge, its 
mass would be about half a million tons, equal to the mass of the heaviest oil tanker ever 
built, fully loaded! 

* * * 

These large and small numbers are so far beyond our ordinary comprehension that 
it is virtually impossible to keep on being more amazed. The numbers are genuinely 
beyond understanding-unless one has developed a vivid feeling for various exponents. 
And even with such an intuition, it is hard to give the universe its awesome due for being 
so extraordinarily huge and at the same time so extraordinarily fine-grained. Number 
numbness sets in early these days. Most people seem entirely unfazed by words such as 
"billion" and "trillion"; they simply become synonyms for the meaningless "zillion". 

This hit me particularly hard a few minutes after I had finished a draft of this 
column. I was reading the paper, and I came across an article on the subject of nerve gas. 
It stated that President Reagan expected the expenditures for nerve gas to come to about 
$800 million in 1983, and $1.4 billion in 1984. I was upset, but I caught myself being 
thankful that it was not $10 billion or $100 billion. Then, all at once, I really felt ashamed 
of myself. That guy has some nerve gas! How could I have been relieved by the figure of
a "mere" $1.4 billion? How could my thoughts have become so dissociated from the 
underlying reality? One billion for nerve gas is not merely lamentable; it is odious. We 
cannot afford to become number than we are. We need to be willing to be jerked
out of our apathy, because this kind of "joke" is in very poor taste. 

Survival of our species is the name of the game. I don't really care if the number 
of mosquitoes in Africa is greater or less than the number of pennies in the gross national 
product. I don't care if there are more glaciers in the Dead Sea or scorpions in Antarctica. 
I don't care how tall a stack of one billion dollar bills would be (an image that President 
Reagan evoked in 

a speech decrying the size of the national debt created by his predecessors). I don't care a 
hoot about pointless, silly images of colossal magnitudes. What I do care about is what a 
billion dollars represents in terms of buying power: lunches for all the schoolkids in New 
York for a year, a hundred libraries, fifty jumbo jets, a few years' budget for a large 
university, one battleship, and so on. Still, if you love numbers (as I do), you can't help 
but blur the line between number play and serious thinking, because a silly image 
converts into a more serious image quite fluidly. But frivolous number virtuosity, 
enjoyable though it is, is far from the point of this article. 

What I hope people will get out of this article is not a few amusing tidbits for the 
next cocktail party, but an increased passion about the importance of grasping large 
numbers. I want people to understand the very real consequences of those very surreal 
numbers bandied about in the newspaper headlines as interchangeably as movie stars' 
names in the scandal sheets. That's the only reason for bringing up all the more humorous 
examples. At bottom, we are dealing with perceptual questions, but ones with life-and-death
consequences!

* * * 

Combatting number numbness is basically not so hard. It simply involves getting 
used to a second set of meanings for small numbers-namely, the meanings of numbers 
between, say, five and twenty, when used as exponents. It would seem revolutionary for
newspapers to adopt the convention of expressing large numbers as powers of ten, yet to 
know that a number has twelve zeros is more concrete than to know that it is called a "trillion".

I wonder what percentage of our population, if shown the numerals 
"314,159,265,358,979" and "271,828,182,845", would recognize that the former 
magnitude is about 1,000 times greater than the latter. I am afraid that the vast majority 
would not see it and would not even be able to read these numbers out loud. If that is the 
case, it is something to be worried about. 
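The quick visual test being asked for is nothing more than counting digits (a sketch):

```python
a = 314_159_265_358_979   # 15 digits
b = 271_828_182_845       # 12 digits

# Three more digits means roughly a thousand times bigger:
digit_gap = len(str(a)) - len(str(b))
rough_ratio = 10 ** digit_gap
```

The estimate is crude (the true ratio here is nearer 1,156 than 1,000), but the whole point of numeracy at this level is that the digit count alone gives the right order of magnitude.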

One book that attempts valiantly and poetically to combat such numbness, a book 
filled with humility before some of the astounding magnitudes that we have been 
discussing, is called Cosmic View: The Universe in Forty Jumps, by a Dutch 
schoolteacher, the late Kees Boeke. In his book, Boeke takes us on an imaginary voyage 
in pictures, in which each step is an exponential one, involving a factor of ten in linear 
size. From our own size, there are 26 upward steps and 13 downward steps. It is probably 
not coincidental that the book was written by someone from Holland, since the Dutch 
have long been internationally minded, living as they do in a small and vulnerable 
country among many languages and cultures. Boeke closes in what therefore seems to me 
to be a characteristically Dutch way, by pleading that his book's journey will help to 
make people better realize their 

place in the cosmic scheme of things, and in this way contribute to drawing the world 
closer together. Since I find his conclusion eloquent, I would like to close by quoting 
from it: 

When we thus think in cosmic terms, we realize that man, if he is to become 
really human, must combine in his being the greatest humility with the most careful 
and considerate use of the cosmic powers that are at his disposal. 

The problem, however, is that primitive man at first tends to use the power 
put in his hands for himself, instead of spending his energy and life for the good of 
the whole growing human family, which has to live together in the limited space of 
our planet. It therefore is a matter of life and death for the whole of mankind that 
we learn to live together, caring for one another regardless of birth or upbringing. 
No difference of nationality, of race, creed or conviction, age or sex may weaken 
our effort as human beings to live and work for the good of all. 

It is therefore an urgent need that we all, children and grown-ups alike, be 
educated in this spirit and toward this goal. Learning to live together in mutual 
respect and with the definite aim to further the happiness of all, without privilege 
for any, is a clear duty for mankind, and it is imperative that education be brought 
onto this plane. 

In this education the development of a cosmic view is an important and 
necessary element; and to develop such a wide, all-embracing view, the expedition 
we have made in these 'forty jumps through the universe' may help just a little. If
so, let us hope that many will make it! 

Post Scriptum. 

By coincidence, in the same issue of Scientific American as this column appeared 
in, there was a short note in "Science and the Citizen" on the American nuclear arsenal. 
The information, compiled by the Center for Defense Information and the National 
Resources Defense Council, stated that the current stockpile amounted to some 30,000 
nuclear weapons, 23,000 of which were operational. (An excellent way of visualizing this 
is shown in Figure 33-2, the last figure in the book.) The Reagan administration, it said, 
intended to build about 17,000 in the next ten years while destroying about 7,000, thus 
increasing the net arsenal by about 10,000 nuclear weapons. 

This is roughly equivalent to ten tons of TNT per Russian capita. Now what does 
this really mean? Wolf H. Fahrenbach had the same nagging question, and he wrote to 
tell me what he discovered. 

Ten tons of TNT exceeds my numericity, so I asked a demolitions-expert friend of 
mine what one pound, ten pounds, 100 pounds, etc. of TNT could do. One pound of 
TNT in a car kills everybody within and leaves a fiery wreck; ten 

pounds totally demolishes the average suburban home; and 1,000 pounds packed 
inside an old German tank sent the turret to disappear in low overhead clouds. It 
could be reasonably suggested to the administration that most civilized nations are 
content with simply killing every last one of their enemies and that there is no 
compelling reason to have to ionize them. 

Now this was interesting to me, because I happened to remember that the 241 marines 
killed in the recent truck-bombing in Beirut had been in a building brought down by what 
was estimated as one ton of TNT. Ten tons, if well placed, might have done in 2,400 
people, I suppose. Ten tons is my allotment, and yours as well. That's the kind of 
inconceivable overkill we are dealing with in the nuclear age. 

Another way of looking at it is this. There are about 25,000 megatons of nuclear 
weapons in the world. If we decode the "mega" into its meaning of "million", and "ton" 
into "2,000 pounds", we come up with 25,000 X 1,000,000 X 2,000 pounds of TNT-equivalent,
which is 50,000,000,000,000 pounds to be distributed among us all, perhaps
not equally-but surely there's enough to go around. 
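The decoding, spelled out (the world-population figure is my own rough early-1980's assumption, not from the text):

```python
megatons = 25_000
tons_of_tnt = megatons * 1_000_000     # "mega" means million
pounds_of_tnt = tons_of_tnt * 2_000    # a ton is 2,000 pounds
# pounds_of_tnt is 50,000,000,000,000 -- the figure in the text

world_population = 4_500_000_000       # rough early-1980's estimate (my assumption)
pounds_each = pounds_of_tnt / world_population   # about 11,000 pounds apiece
```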

I find myself oscillating between preferring to see it spelled out that way with all 
the zeros, and leaving it as 25,000 megatons. What I have to remember is what 
"megaton" really means. Last summer I visited Paris and climbed the butte of 
Montmartre, from the top of which, at the foot of the Sacre Coeur, one has a beautiful 
view of all of Paris spread out below. I couldn't refrain from ruining my two friends' 
enjoyment of this splendid panorama, by saying, "Hmm ... I bet one or two nicely placed 
megatons would take care of all this." And so saying, I could see exactly how it might 
look (provided I were a superbeing whose eyes could survive light and heat blasts far 
brighter than the sun). I know it seems ghoulish, yet it was also completely in keeping 
with my thoughts of the time. 

Now if you just say to yourself "one megaton equals Paris's doom" (or some 
suitable equivalent), then I think that the phrase "25,000 megatons" will become as vivid 
as the long string of zeros-in fact, probably more vivid. It seems to me that this perfectly 
illustrates how the psychological phenomenon known as chunking is of great importance 
in dealing with otherwise incomprehensible magnitudes. 

Chunking is the perception as a whole of an assembly of many parts. An excellent 
example is the difference between 100 pennies and the concept of one dollar. We would 
find it exceedingly hard to deal with the prices of cars and houses and computers if we 
always had to express them in pennies. A dollar has psychological reality, in that we 
usually do not break it down into its pieces. The concept is valuable for that very reason. 

It seems to me a pity that the monetary chunking process stops at the dollar level. 
We have inches, feet, yards, miles. Why could we not have pennies, dollars, grands, 
megs, gigs? We might be better able to digest newspaper headlines if they were 
expressed in terms of such chunked units -provided that those units had come to mean 
something to us, as such. We 

all have a pretty good grasp of the notion of a grand. But what can a meg or a gig buy you 
these days? How many megs does it take to build a high school? How many gigs is the 
annual budget of your state? 

Most numerically-oriented people, in order to answer these questions, will have to 
resort to calculation. They do not have such concepts at their mental fingertips. But in a 
numerate populace, everyone should. It should be a commonplace that a new high school 
equals about 20 megs, a state budget several gigs, and so on. These terms should not be 
thought of as shorthand for "million dollars" and "billion dollars" any more than "dollar" 
is a shorthand for "100 cents". They should be autonomous concepts-mental "nodes"-with
information and associations dangling from them without any need for conversion to 
some other units or calculation of any sort. 
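A minimal sketch of such chunked units as a conversion routine (the function and its exact output format are hypothetical illustrations, of course):

```python
# Pennies, dollars, grands, megs, gigs: each step past the dollar
# is a factor of 1,000.
UNITS = [("gig", 1_000_000_000), ("meg", 1_000_000), ("grand", 1_000), ("dollar", 1)]

def chunk(amount_in_dollars):
    """Express a dollar amount in the largest comfortable chunked unit."""
    for name, size in UNITS:
        if amount_in_dollars >= size:
            return f"{amount_in_dollars / size:g} {name}s"
    return f"{amount_in_dollars * 100:g} pennies"

# A new high school at 20 megs, a state budget at a few gigs:
school = chunk(20_000_000)
budget = chunk(3_500_000_000)
```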

If that kind of direct sense of certain big numbers were available, then we would 
have a much more concrete grasp on what otherwise are nearly hopeless abstractions. 
Perhaps it is in the vast bureaucracies' interest that their budgets remain opaque and 
impenetrable-but even that holds true only in the short run. Economic ruin and military 
suicide are not good for anybody in the long run-not even arms manufacturers ! The more 
transparent the realities are, the better it is for any society in the long run. 

* * * 

This kind of total incomprehension extends even to the highest echelons of our 
society. Bucknell University President Dennis O'Brien recently wrote on the New York
Times op-ed page: "My own university has just opened a multibillion-dollar computer 
center and prides itself that 90 percent of its graduates are computer-literate." And the 
Associated Press distributed an article that said that the U.S. federal debt ceiling had gone 
up to 1.143 trillion dollars, and then cited the latest figure for the debt itself as 
"$1,070,241,000". In that case, what's the hurry about raising the ceiling? These may 
have been typos, but even so, they betray our society's rampant innumeracy. 

You may think I am being nitpicky, but when our populace is so boggled by large 
numbers that even many university-educated people listen to television broadcasts 
without an ounce of comprehension of the numbers involved, I think something has gone 
haywire somewhere. It is a combination of numbness, apathy, and a resistance to 
recognizing the need for new concepts. 

One reader, a refugee from Poland, wrote to me, complaining that I had 
memorized hundreds of digits of π in my high school days without appreciating the
society that afforded me this luxury. In East Block countries, he implied, I would never 
have felt free to do something so decadent. My feeling, though, is that memorizing π was
for me no different from any other kind of exuberant play that adolescents in any country 
engage in. In a recent book by Stephen B. Smith, called The Great Mental 

Calculators-a marvelously engaging book, by the way-one can read the fascinating life 
stories of people who were far better than I with figures. Many of them grew up in dismal 
circumstances, and numbers to them were like playmates, life-saving friends. For them, 
to memorize π would not be decadent; it would be a source of joy and meaning. Now I
had read about some of these people as a teen-ager, and I admired, even envied, their 
abilities. My memorization of π was not an isolated stunt, but part of an overall campaign
to become truly fluent with numbers, in imitation of calculating prodigies. Undoubtedly 
this helped lead me toward a deeper appreciation of numbers of all sizes, a better 
intuition, and in some intangible ways, a clearer vision of just what it is that the 
governments on this earth-West Block no less than East-are up to. 

But there may be more direct routes to that goal. For example, I would suggest to 
interested readers that they attempt to build up their own numeracy in a very simple way. 
All they need to do is to get a sheet of paper and write down on it the numbers from 1 to 
20. Then they should proceed to think a bit about some large numbers that seem of 
interest to them, and try to estimate them within one order of magnitude (or two, for the 
larger ones). By "estimate" here, I mean actually do a back-of-the-envelope (or mental) 
calculation, ignoring all but factors of ten. Then they should attach the idea to the 
computed number. Here are some samples of large numbers: 

• What's the gross state product of California? 

• How many people die per day on the earth? 

• How many traffic lights are there in New York City? 

• How many Chinese restaurants are there in the U.S.? 

• How many passenger-miles are flown each day in the U.S.?

• How many volumes are there in the Library of Congress? 

• How many notes are played in the full career of a concert pianist? 

• How many square miles are there in the U.S.? How many of them have 
you been in? 

• How many syllables have been uttered by humans since 1400 A.D.? 

• How many "300" games are bowled in the U.S. per year? 

• How many stitches are there in a stocking? 

• How many characters does one need to know to read a Chinese newspaper?

• How many sperms are there per ejaculate? 

• How many condors remain in the U.S.? 

• How many moving parts are in the Columbia space shuttle? 

• How many people in the U.S. are called "Michael Jackson"? "Naomi 

• What volume of oil is removed from the earth each year? 

• How many barrels of oil are left in the world? 

• How much carbon monoxide enters the atmosphere each year in auto 
exhaust fumes? 

• How many meaningful, grammatical, ten-word sentences are there in English?

• How long did it take the 200-inch mirror of the Palomar telescope to 
cool down? 

• What angle does the earth's orbit subtend, as seen from Sirius? 

• What angle does the Andromeda galaxy subtend, as seen from earth? 

• How many heartbeats does a typical creature live? 

• How many insects (of how many species) are now alive? 

• How many giraffes are now alive? Tigers? Ostriches? Horseshoe crabs? 

• What are the pressure and temperature at the bottom of the ocean? 

• How many tons of garbage does New York City put out each week? 

• How many letters did Oscar Wilde write in his lifetime? 

• How many typefaces have been designed for the Latin alphabet? 

• How fast do meteorites move through the atmosphere? 

• How many digits are in 720 factorial? 

• How much is a brick of gold worth? 

• How many gold bricks are there in Fort Knox? How much are they worth?

• How fast do your wisdom teeth grow (in miles per hour, say)? 

• How fast does your hair grow (again in miles per hour)? 

• How fast is Venice sinking? 

• How far is a million feet? A billion inches? 

• What is the weight of the Empire State Building? Of Hoover Dam? Of 
a fully loaded jumbo jet? 

• How many commercial airline takeoffs occur each year in the world? 

These or similar questions will do. The main thing is to attach some concreteness 
to those numbers from 1 to 20, seen as exponents. They are like dates in history. At first, 
a date like "1685" may be utterly meaningless to you, but if you love music and find out 
that Bach was born that year, all of a sudden it sticks. Likewise with this secondary 
meaning for small numbers. I can't guarantee it will work miracles, but you may increase 
your own numeracy and you may also help to increase others'. Merry numbers! 
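As an illustration of the back-of-the-envelope style intended, here is one of the sample questions worked through, keeping only rough powers of ten (every input below is a guess of mine, good to an order of magnitude at best):

```python
# How many notes are played in the full career of a concert pianist?
notes_per_second = 10       # fast passages, both hands (rough guess)
seconds_per_day = 10_000    # about three hours of actual playing
days_per_year = 300
years_of_career = 50

notes = notes_per_second * seconds_per_day * days_per_year * years_of_career
exponent = len(str(notes)) - 1   # the single digit worth remembering: 9
```

The answer one keeps is not 1,500,000,000 but simply "about 9", the exponent, which then joins the mental list of numbers from 1 to 20 with a concrete meaning attached.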


Changes in Default Words 
and Images, Engendered 
by Rising Consciousness 

November, 1982 

A father and his son were driving to a ball game when their car stalled on the railroad 
tracks. In the distance a train whistle blew a warning. Frantically, the father tried to start 
the engine, but in his panic, he couldn't turn the key, and the car was hit by the onrushing 
train. An ambulance sped to the scene and picked them up. On the way to the hospital,
the father died. The son was still alive but his condition was very serious, and he needed 
immediate surgery. The moment they arrived at the hospital, he was wheeled into an 
emergency operating room, and the surgeon came in, expecting a routine case. However, 
on seeing the boy, the surgeon blanched and muttered, "I can't operate on this boy-he's 
my son." 

What do you make of this grim riddle? How could it be? Was the surgeon lying or 
mistaken? No. Did the dead father's soul somehow get reincarnated in the surgeon's 
body? No. Was the surgeon the boy's true father and the dead man the boy's adopted 
father? No. What, then, is the explanation? Think it through until you have figured it out 
on your own-I insist! You'll know when you've got it, don't worry. 

* * * 

When I was first asked this riddle, a few years ago, I got the answer within a minute or 
so. Still, I was ashamed of my performance. I was also disturbed by the average 
performance of the people in the group I was with-all educated, intelligent people, some 
men, some women. I was neither the quickest nor the slowest. A couple of them, even 
after five minutes of scratching their heads, still didn't have the answer! And when they 
finally hit upon it, their heads hung low. 

Whether we light upon the answer quickly or slowly, we all have something to 
learn from this ingenious riddle. It reveals something very deep about how so-called 
default assumptions permeate our mental representations and channel our thoughts. A 
default assumption is what holds true in what you might say is the "simplest" or "most 
natural" or "most likely" possible model of whatever situation is under discussion. In this 
case, the default assumption is to assign the sex of male to the surgeon. The way things 
are in our society today, that's the most plausible assumption. But the critical thing about 
default assumptions-so well revealed by this story-is that they are made automatically, 
not as a result of consideration and elimination. You didn't explicitly ponder the point and 
ask yourself, "What is the most plausible sex to assign to the surgeon?" Rather, you let 
your past experience merely assign a sex for you. Default assumptions are by their nature 
implicit assumptions. You never were aware of having made any assumption about the 
surgeon's sex, for if you had been, the riddle would have been easy! 

Usually, relying on default assumptions is extremely useful. In fact, it is 
indispensable in enabling us-or any cognitive machine-to get around in this complex 
world. We simply can't afford to be constantly distracted by all sorts of theoretically 
possible but unlikely exceptions to the general rules or models that we have built up by 
induction from many past experiences. We have to make what amount to shrewd guesses- 
and we do this with great skill all the time. Our every thought is permeated by myriads of 
such shrewd guesses-assumptions of normalcy. This strategy seems to work pretty well. 
For example, we tend to assume that the stores lining the main street of a town we pass 
through are not just cardboard facades, and for good reason. Probably you're not worried 
about whether the chair you're sitting on is about to break. Probably the last time you 
used a salt shaker you didn't consider that it might be filled with sugar. Without much 
trouble, you could name dozens of assumptions you're making at this very moment-all of 
which are simply probably true, rather than definitely true. 

This ability to ignore what is very unlikely-without even considering whether or 
not to ignore it! -is part of our evolutionary heritage, coming out of the need to be able to 
size up a situation quickly but accurately. It is a marvelous and subtle quality of our 
thought processes; however, once in a while, this marvelous ability leads us astray. And 
sexist default assumptions are a case in point. 

* * * 

When I wrote my book Gödel, Escher, Bach: an Eternal Golden Braid, I employed the
dialogue form, a form I enjoy very much. I was so inspired by Lewis Carroll's dialogue 
"What the Tortoise Said to Achilles" that I decided to borrow his two characters. Over 
time I developed them into my own characters. As I proceeded, I found that I was 
naturally led to bringing 

in some new characters of my own. The first one was the Crab. Then came the Anteater, 
the Sloth, and various other colorful characters. Like the Tortoise and Achilles, the new 
characters were ali male: Mr. Crab, Mr. Sloth, and so on. 

This was in the early 70's, and I was quite conscious of what I was doing. Yet for 
some reason, I could not get myself to invent a female character. I was upset with myself, 
yet I couldn't help feeling that introducing a female character "for no reason" would be 
artificial and therefore too distracting. I didn't want to mix sexual politics-an ugly real- 
world issue-with the ethereal pleasures of an ideal fantasy world. 

I racked my brains on this for a long time, and even wrote an apologetic dialogue 
on this very topic-an intricate one in which I myself figured, discussing, with my own 
characters, the question of sexism in writing. Aside from my friends Achilles and the 
Tortoise, the cast featured God as a surprise visitor-and, as in the old joke, she was black. 
Though corny, it was an earnest attempt to grapple with some problems of conscience 
that were plaguing me. The dialogue never got polished, and was not included in my 
book. However, a series of reworkings gradually turned it into the "Six-Part Ricercar" 
with which the book concludes. 

My pangs of conscience did lead me to making a few minor characters female: 
there were Prudence and Imprudence, who briefly argued about consistency; Aunt 
Hillary, a conscious ant colony; and every even-numbered member of the infinite series 
Genie, Meta-genie, Meta-meta-genie, and so on. I was particularly proud of this gentle 
touch. But no matter how you slice it, females got the short end of the stick in GEB. I was 
not altogether happy with that, but that's the way it was. 

Aside from its dialogues being populated with male characters, the book was also 
filled with default assumptions of masculinity: the standard "he" and "his" always being 
chosen. I made no excuse for this. I gave my reader credit for intelligence; I assumed he 
would know that often, occurrences of such pronouns carry no gender assumptions but 
simply betoken a "unisex" person. 

Over a period of time, however, I have gradually come to a different feeling about 
how written language should deal with people of unspecified sex, or with supposedly 
specific but randomly chosen people. It is a very subtle issue, and I do not claim to have 
the final answers by any means. But I have discovered some approaches that please me 
and that may be useful for other people. 

* * * 

What woke me up? Given that I was already conscious of the issues, what new 
element did it take to induce this shift? Well, one significant incident was the telling of 
that surgeon riddle. My own reaction to it and the reactions of my companions surprised 
me. To most of us, bizarre worlds with such things as reincarnation came more easily 
to mind than the idea that a surgeon 
could be a woman! How ludicrous! The event underscored for me how deeply ingrained 
are our default assumptions, and how unaware we are of them. This seemed to me to have 
potential consequences far beyond what one might naively think. I am hardly one to 
believe that language "pushes us around", that we are its slaves-yet on the other hand, I 
feel that we must do our best to rid our language of usages that may induce or reinforce 
default assumptions in our minds. 

One of the most vivid examples of this came a couple of years after my book had 
been published. I was describing its dialogues to a group of people, and I said I regretted 
that the characters had all been male. One woman asked me why, and I replied, "Well, I 
began with two males-Achilles and the Tortoise-and it would have been distracting to 
introduce females seemingly for no reason except politics ..." Yet as I heard myself 
saying this, a horrifying thought crept into my mind for the first time: How did I know 
the Tortoise was really a male? Surely he was, wasn't he? Obviously! I seemed to 
remember that very well. 

And yet the question nagged at me. As I had a copy of my book at hand, with the 
Carroll dialogue reprinted in it, I turned to it for verification. I was nonplussed to see that 
Carroll nowhere even hints at the sex of his Tortoise! In fact, the opening sentence runs 
thus: "Achilles had overtaken the Tortoise, and had seated himself comfortably upon its 
back." This is the only occurrence of "it"; from there on, "the Tortoise" is what Carroll 
writes. "Mr. Tortoise", indeed! Was this entirely a product of my own defaults? 

Probably not. The first time I had heard about the Carroll dialogue, many years 
earlier, someone-a male-had described it to me. This person very likely had passed on his 
default assumption to me. So I could claim innocence. Moreover, I realized, I had read a 
few responses in philosophy journals to the Carroll dialogue, and when I went back and 
looked at them, I found that they too had featured a "sexed" Tortoise, in contrast to the 
way Carroll had carefully skirted the issue. Though I felt somewhat exonerated, I was 
still upset. I kept on asking myself, "What if I had envisioned a female Tortoise to begin 
with? Then how would GEB have been?" This was a most provocative counterfactual. 

One thing that had dissuaded me from using female characters was the 
distractingly political way that some books had of referring to the reader or briefly 
mentioned random people (such as "the student" or "the child") as "she" or "her". It stuck 
out like a sore thumb, and made one think so much about sexism that the main point of 
the passage often went unnoticed. It seemed to me that such a strategy might be too blunt 
and simplistic, and could easily turn more people off than on. 

And yet I couldn't agree with the attitude of some people-largely but by no means 
exclusively men-who refused to switch their usage on grounds of "tradition", "linguistic 
purity", "beauty of the language", and so on. To 

Changes in Default Words and Images, Engendered by Rising Consciousness 


be sure, words like "fireperson", "snowperson", "henchperson", and "personhandle" are 
unappealing-but they aren't your only recourse! There are other options. 

In the introduction to Robert Nozick's Philosophical Explanations-an exciting and 
admirable book on philosophy-I came across this footnote: "I do not know of a way to 
write that is truly neutral about pronoun gender yet does not constantly distract attention- 
at least the contemporary reader's-from the sentence's central content. I am still looking 
for a satisfactory solution." From this point on, Nozick uses "he" and "him" nearly 
everywhere. My reaction was annoyance: could Nozick have really looked very hard? 
Part of my annoyance was undoubtedly due to my own guilt feelings for having done no 
better in GEB, but some was due to my feeling that Nozick had failed to see a fascinating 
challenge here-one to which he could bring his philosophical insight, and in doing so, 
make a creative contribution to society. 

* * * 

As best I can recall, I first began seriously trying to "demasculinize" my prose in 
working on the dialogue on the Turing Test that eventually wound up as my 
"Metamagical Themas" column for May, 1981, and which is Chapter 22 in this book. I 
wrote the dialogue with the sexes of the characters shifting about fluidly in my mind, 
since I was modeling the characters on mixtures of various people I knew. I always 
imagined the character I most agreed with more as female than as male, and the others 
more as male than as female. 

One day, it occurred to me that the beginning of the dialogue discussed Turing's 
question "Can you in principle tell, merely from a written dialogue, a female from a 
male?" This question applied so well to the very characters discussing it that I could not 
resist making some character "ambisexual"-ambiguous in terms of sex. Thus I named one 
of them "Pat". Soon I realized there was no reason not to extend this notion to all the 
characters in the dialogue, making it a real guessing game for readers. Thus were born 
"Sandy", "Chris", and "Pat". 

Writing this dialogue was a turning point for me. Even though its total sexual 
equality had been motivated by my desire to give the dialogue an interesting self- 
referential twist, I found that I was very relieved to have broken out of the all-male mold 
that I had earlier felt locked into. I started looking for more ways to make up for my past 
default sexism. 

It was not easy, and still is not. For example, in teaching classes, I find myself 
wanting to use the pronoun "she" to refer back to an earlier unspecified person-a random 
biologist, say, or a random logician. Yet I find it doesn't seem to come out of my mouth 
easily. What I have trained myself to do rather well is to avoid gender-laden pronouns 
altogether, thus, like Carroll, "skirting" the issue. Sometimes I just keep on saying "the 
logician" over and over again, or perhaps I just say "the person" or "that 

Changes in Default Words and Images, Engendered by Rising Consciousness 


person". Every once in a while, I say "he or she" (or "he" or "she"), although I have to 
admit that I more often simply say "they". 

Someone who, like me, is trying to eliminate gender-laden pronouns from their 
speech altogether can try to rely on the word "they", but they will find themself in quite a 
pickle as soon as they try to use any reflexive verbal construction such as "the writer will 
paint themselves into a corner", and what's worse is that no matter how this person tries, 
they'll find that they can't extricate themselves gracefully, and consequently he or she will 
just flail around, making his or her sentence so awkward that s/he wis/hes s/he had never 
become conscious of these issues of sexism. Obviously, using "they" just carries you 
from the frying pan into the fire, as you have merely exchanged a male-female ambiguity 
for a singular-plural ambiguity. The only advantage to this ploy is, I suppose, that there 
is/are, to my knowledge, no group(s) actively struggling for equality between singular 
and plural. 

One possible solution is to use the plural exclusively-to refer to "biologists" or "a 
team of biologists", never just "a biologist". That way, "they" is always legitimately 
referring to a plural. However, this is a very poor solution, since it is much more vivid to 
paint a picture of a specific individual. A body can't always deal in plurals! 

Another solution, somewhat more pleasing, is to turn an impersonal situation into 
a more personal one, by using the word "you". This way, your listeners or readers are 
encouraged to put themselves in the situation, to experience it vicariously. Sometimes, 
however, this can backfire on you. Suppose you're talking about the strange effects in 
everyday life that statistical fluctuations can produce. You might write something like 
this: "One day your mailman might have so much mail to sort down at the post office that 
it's afternoon by the time she gets started on her route." At the outset, your avid reader 
Polly manufactures an image of her friendly postman sorting letters; a few moments later, 
she is told the postman is a woman. Jolt! It's not just a surface-level jolt (the collision of 
the words "mailman" and "she"), although it's that too; it's really an image-image conflict, 
since you expressly invited Polly to think of her own mailman, who happens to be a man. 
Even if you'd said "your letter carrier", Polly would still have been jolted. On the other 
hand, if you'd asked Polly to think about, say, "Henry's letter carrier", then that "she" 
would not have caused nearly as much surprise-maybe even not any. 

* * * 

In teaching my classes, I try always to use sex-neutral nouns such as "letter 
carrier" and "department head" (which I prefer to "chairperson"), and having done so, I 
try my utmost to avoid using gender-specific pronouns to refer back to them. But I have 
realized that this is largely a show put on for my own benefit. I'm not actively 
undermining any bad stereotypes simply by avoiding them. The fact that I'm not saying 
"he" where many 

Changes in Default Words and Images, Engendered by Rising Consciousness 


people would is not the sort of thing that will grab my students by the collar and shake 
there. A few people may notice my "good behavior", but those are the ones who are 
already attuned to these issues. 

So why not just use an unexpected "she" now and then? Isn't that the obvious 
thing to do? Perhaps. But in many cases, as Nozick pointed out, it may seem so 
politically motivated that it will distract more than enlighten. The problem is, once you 
start to describe some unknown receptionist (say), listeners will manufacture a fresh, 
blank mental node to represent that receptionist. By "node", I mean something like a 
mental dossier or questionnaire with a number of questions wanting immediate answers. 

Now, it is naive to suppose that a few seconds after they have manufactured their 
new node, their image of the receptionist is-or ever was-floating in a sexual limbo. It is 
next to impossible to build up more than the most fleeting, insubstantial image of a 
person without assuming he's a she, or vice versa. The instant that node is manufactured, 
unless you fill in all its blanks, it will fill them in for itself. (Imagine that each question 
has a default answer entered in light pencil, easily erasable but to be used in case no 
other answer is provided.) And unfortunately-even for ardent feminists-those unconscious 
default assumptions are usually going to be sexist. (Feminists can be as sexist as the next 
guy!) For example, I have realized, to my dismay, that my defaults run very deep-so deep 
that, even when I say "his or her telephone", I am often nonetheless thinking "her 
telephone", and envisioning a woman at a desk. This is most disconcerting. It reveals 
that, although my self-training has succeeded quite well at the linguistic level, it hasn't 
yet fully filtered down to the imagistic level. 
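The "node" described above-a dossier whose unanswered questions get filled in, in light pencil, by default answers that explicit information can later erase-is much like what AI researchers call a frame with default slots. The following is only a minimal illustrative sketch in Python; all the names and the particular defaults are hypothetical, not anything from the column itself:

```python
# A sketch of the "mental node" metaphor: a dossier whose blanks are
# quietly filled in by default answers ("light pencil") unless explicit
# information ("ink") has been supplied to erase them.

class Node:
    # Pencilled-in default answers, consulted only when no explicit
    # answer exists. (A sexist default, as the text laments.)
    DEFAULTS = {"sex": "female"}

    def __init__(self):
        self.explicit = {}  # answers actually supplied, in "ink"

    def fill_in(self, question, answer):
        # Explicit information erases the pencilled default.
        self.explicit[question] = answer

    def answer(self, question):
        # Ink wins; otherwise the pencilled default fills the blank.
        return self.explicit.get(question, Node.DEFAULTS.get(question))

receptionist = Node()
print(receptionist.answer("sex"))    # the default quietly fills the blank
receptionist.fill_in("sex", "male")  # explicit information overrides it
print(receptionist.answer("sex"))
```

The point of the sketch is that the default answer is returned the very instant the node exists-there is never a moment when the blank is truly blank.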

As a corrective measure, I have trained myself, over the past few years, to have a 
sort of "second-order reflex" triggered by the manufacture of a new node for an unknown 
individual. What this reflex does is to make me consciously attempt to assign a female 
wherever my first-order reflex-that is, the naive reflex- would tend to automatically assign 
a male (and vice versa). I have become pretty good at this, but sometimes it is difficult or 
just plain silly to take this default-violating image seriously. For instance, when there's a 
slow truck somewhere ahead of me, holding up the traffic on a two-lane road, it is so 
tempting to say, "Why doesn't that guy pull over and let the rest of us pass him?" 
Although I won't say it that way, I also won't say, "Why doesn't he or she let us pass him 
or her?" It's not easy for me to talk about the pilot of the airliner I'm riding in in sex- 
neutral terms, because the vast majority of commercial airline pilots are men. The person 
in the seat next to me will look at me a bit strangely if I say, "He or she just made a 
beautiful landing, didn't they?" And if someone tells me that a thief has just broken into 
their car, should I say, "How much did he or she get away with?" 

* * * 

So haven't I painted myselves into a corner? Am I not damned if I do, damned if I 
don't? After all, I've said that on the one hand, the passive approach of merely avoiding 
sexist usages isn't enough, but that on the other hand, the active approach of throwing in 
jolting stereotype violations can be too much. Is there no successful middle path? 

I have discovered, as a matter of fact, what I think is a rather graceful compromise 
solution to such dilemmas. Instead of dropping a nondefault gender into her lap after your 
reader has set up her default images of the people involved in the situation, simply don't 
let her get off the ground with her defaults. Upset her default assumptions explicitly from 
the word "go". 

I did this in my column on big numbers and innumeracy (Chapter 6), at the 
beginning of which I retold an old joke. Usually the storyteller begins, "A professor 
was giving a lecture on the fate of the solar system, and he said . . ." Almost always, 
the professor is made out to be a male. This may reflect the 
sexual statistics for astronomers, but individuals aren't statistics. 

So how could this story be improved-gracefully? Well, there is a delay-not a long 
one, but still a delay-between the first mention of the professor and the pronoun "he". It's 
long enough for that default male image to get solidly-even though implicitly-implanted 
in the listener's mind. So just don't let that happen. Instead, make the professor a woman 
from the very start. By this I certainly do not mean that you should begin your story, "A 
lady professor was giving a lecture on the fate of the solar system, and . . .". Good grief! 
That's horrible! 

My solution, instead, was to give her sex away by her name. I invented the silly 
pseudo-Slavic name "Professor Bignumska", whose ending in "a" signifies that its owner 
is female. To be sure, not everyone is attuned to such linguistic subtleties, so that for 
some people it will come as a surprise when a line or two later, they read the phrase 
"according to her calculations". But at least they will get the point in the end. 

What's much worse is when people do not miss the point, but rather, reject the 
point altogether. In the published French translation of my article, my "Professor 
Bignumska" was turned into monsieur le professeur Grannombersky. Not only was the 
sex reversed, but clearly the translator had recognized what I was up to, and had 
deliberately removed all telltale traces by switching the ending to a masculine one. This 
is certainly disappointing. On the other hand, it was a relief to see that in the German 
translation, the professor's femininity remained intact: she was now called die namhafte 
Kosmogonin Großzahlia. Here not only her name but even her title has a feminine 
ending! 

This practice of giving some professions explicitly feminine and masculine words 
certainly makes for trouble. What do you do when talking about a mixed group of actors 
and actresses? Unless you want to be verbose, you have little choice but to refer to 
"actors". Why does a word like "waiter", with its completely noncommittal ending, have 
to refer to a male? We are hard put to come up with a neutral term. Certainly 
"waitperson" is 

Changes in Default Words and Images, Engendered by Rising Consciousness 


a strange concoction. "Server" is not so bad, and nowadays I don't object to "waitron", 
although the first time I heard it, it sounded very odd. It is nice to see "stewardess" and 
"steward" gradually getting replaced by the general title "flight attendant". 

All languages I have studied are in one way or another afflicted by these sorts of 
problems. Whereas we in English have our quaint-sounding "poetess" and "aviatrix", in 
French they have no better way of referring to a female writer or professor than une 
femme ecrivain or une femme professeur, the default male gender being built right into 
the nouns themselves. That is, ecrivain and professeur are both masculine nouns. In order 
to allow them to refer to women, you must treat them essentially as adjectives following 
(and modifying) the noun femme ("woman"). 

Another peculiarity of French is the word quelqu'un-the word for "someone". 
It literally means "some one", and it requires the masculine un ("one") no matter whom it 
refers to. This means, for example, that if an unfamiliar woman knocks at the door of 
Nicole's house, and Nicole's young daughter answers the door, she is likely to yell to 
Nicole: Maman, il y a quelqu'un à la porte! ("Mommy, there's someone at the door!") It is 
impossible to "feminize" this pronoun: Maman, il y a quelqu'une à la porte. Even sillier 
would be to try to transform the impersonal il y a-"there is"-into a feminine version, elle 
y a. It just rings absurd. The masculine il is as impersonal as "it" in "It is two o'clock." 
Surely no one would suggest that we say "They are two o'clock". 

In English, we have some analogous phenomena. If a pair of strangers knock at 
Paul's door, his daughter may yell to him, "Daddy, someone's at the door." She will not 
say, "Sometwo are at the door." What this illustrates is that the pronoun "someone" does 
not carry with it strong implications of singularity. It can apply to a group of people 
without sounding odd. Perhaps, analogously, quelqu'un is not as sexist at the image level 
as its surface level would suggest. But this is hard to know. 

Normally in French, to speak about a mixed or unspecified group of people, one 
uses the masculine plural pronoun ils. Even a group whose membership hasn't yet been 
determined, but which stands a fair chance of including at least one male among twenty 
females, will still call for ils. Female speakers grow up with this usage, of course, and 
follow it as naturally and unconsciously as male speakers do. Can you imagine the uproar 
if there were a serious attempt to effect a reversal of this age-old convention? How would 
men feel if the default assumption were to say elles? How would women feel? How 
would people in general feel if a group consisting of several men and one woman were 
always referred to as elles? 

Curiously enough, there are circumstances where nearly that happens. There is a 
formalistic style of writing often found in legal or contractual documents in which 
the word personnes is used to refer to an abstract and unspecified 
group of people; thereafter the feminine plural pronoun elles is used to refer back to that 
noun. Since the word personne is of feminine gender (think of the Latin persona), this is 
the proper pronoun to use, even if the group being referred to is known to consist of 
males only! 

Although it is grammatically correct, when this is dragged out over a long piece 
of text it can give the reader a strange impression, since the original noun is so distant 
that the pronoun feels autonomous. One feels that the pronoun should at some point 
switch to ils (and in fact, sometimes this happens). When it doesn't, it can make the 
reader uneasy. Perhaps this is just my own reaction. Perhaps it's merely the typical 
reaction of someone used to having the default pronoun for an unspecified group of 
people be masculine. Perhaps it's good for a man to experience that slight sense of 
malaise that women may feel when they see themselves referred to over and over again 
as ils, simply because there is likely to be at least one male present in the group. 

We are all, of course, members of that collective group often referred to as 
"mankind", or simply "man". Even the ardent feminist Ashley Montagu once wrote a 
book called Man: His First Two Million Years. (I guess this was a long time ago.) Many 
people argue that this usage of "man" is completely distinct from the usage of "man" to 
refer to individuals, and that it is devoid of sexual implications. But many studies have 
been done that undeniably establish the contrary. David Moser once vividly pointed out 
to me the sexism of this usage. He observed that in books you will find many sentences in 
this vein: "Man has traditionally been a hunter, and he has kept his females close to the 
hearth, where they could tend his children." But you will never see such sentences as 
"Man is the only mammal who does not always suckle his young." Rather, you will see 
"Man is the only mammal in which the females do not always suckle their young." So 
much for the sexual neutrality of the generic "man". I began to look for such anomalies, 
and soon ran across the following gem in a book on sexuality: "It is unknown in what 
way Man used to make love, when he was a primitive savage millions of years ago." 

* * * 

Back to other languages. When I spent a few months in Germany working on my 
doctoral dissertation, I learned that the term for "doctoral advisor" in German is 
Doktorvater-literally, "doctor father". I immediately wondered, "What if your 
Doktorvater is a woman? Is she your Doktormutter?" Since that rang absurd to my ears, I 
thought that a better solution would be to append the feminizing suffix -in, making 
Doktorvaterin-"doctor fatheress". However, it seems that a neutral term just might be 
better. 

Italian and German share an unexpected feature: In both, the respectful way of 
saying "you" is identical to the feminine singular pronoun, the only 

Changes in Default Words and Images, Engendered by Rising Consciousness 


difference being capitalization. In Italian, it's Lei; in German, Sie. Now in German the 
associated verb uses a plural ending, so that the connection to "she" is somewhat diluted, 
but in Italian, the verb remains a third-person singular verb. Thus, to compliment a man, 
you might say: Oh, com'è bello Lei! ("How handsome She is!") Of course, Italians do 
not hear it this naive way. To them, it might seem equally bewildering that in English, 
adding 's' to a noun makes it plural whereas adding 's' to a verb makes it singular. 

One of the strangest cases is that of Chinese. In Mandarin Chinese, there has 
traditionally been just one pronoun for "he" and "she", pronounced "tā" and written as in 
Figure 7-1a. This character's left side consists of the "person" radical, indicating that it 
refers to a human being, sex unspecified. Curiously, however, in the linguistic reforms 
carried out in China during the past 70 years or so, a distinction has been introduced 
whereby there are now separate written forms for the single sound "tā". The old character 
has been retained, but now in addition to its old meaning of "s/he", it has the new 
meaning of "he" (wouldn't you know?), while a new character has been invented for 
"she". The new character's radical is that for "woman" or "female", so the character looks 
as is shown in Figure 7-1b. 

The new implication-not present in Chinese before this century-is that the 
"standard" type of human being is a male, and that females have to be indicated specially 
as "deviant". It remains a mystery to me why the Chinese didn't leave the old character as 
it was-a neutral pronoun-and simply manufacture two new characters, one with the 
female radical and one with the male radical, as in Figure 7-1c. (These three characters 
were created on a VAX computer using the character-designing program Han Zi, written 


FIGURE 7-1. Characters for third-person singular pronouns in Chinese. In (a), the 
generic, or neutral, pronoun, corresponding neither to "she" nor to "he", but more to our 
usage of "they" in the singular. In (b), a new character first introduced some 70 years 
ago, meaning "she", thus setting females apart as "special" or "deviant" (depending on 
your point of view). In (c), a character of my own invention, being the masculine 
counterpart of that in (b), thus restoring sexual symmetry to the language's pronouns. 
The left-hand element of all three characters is the radical, or semantic component, and 
in the three cases its meaning is: (a) "person"; (b) "female"; (c) "male". Unfortunately, 
"male" is considered by pedants not to be a legitimate radical in Chinese. For purposes 
of comparison, though, my new character is about as offensive to an average Chinese 
reader as the mixing of Latin and Greek roots is to us-or, for that matter, as offensive as 
the recently constructed title "Ms." Of course, there are English-speaking pedants who 
object to "Ms.", whining, "But it's not an abbreviation for anything!" [Characters 
printed by the Han Zi program, developed by David B. Leake and the author at Indiana 
University.] 

by David B. Leake and myself. More of the program's output is shown in Figure .'-13.) 
To give a corresponding (though exaggerated) example in English, can you imagine a 
political reform in which the word "person" came to mean "man", and for "woman" we 
were told to say "personess"? Actually, as I found out some time 
after inventing my new Chinese character, the character meaning "male" is not generally 
considered a radical, whereas the character meaning "female" is. A typical asymmetry, 
obviously not limited to the Occident! 

The upshot is that in China, there is no longer a truly gender-free pronoun in 
writing. Formerly, you could write a whole story without once revealing the sex of its 
participants, whereas now, your intentions to be ambiguous are themselves ambiguous. In 
the case of the joke about the cosmologist with its default option, it is interesting to 
consider which way would be better for the sake of feminism. Would you rather have the 
storyteller leave the professor's sex unspecified throughout the story, so that people's 
default options would be invoked? Or would you rather have the storyteller forced to 
commit himself? 

* * * 

One of my pet peeves is the currently popular usage of the word "guys". You 
often hear a group of people described as "guys", even when that group includes women. 
In fact, it is quite common to hear women addressing a group of other women as "you 
guys". This strikes me as very strange. However, when I have asked some people about 
it, they have adamantly maintained that, when in the plural, the word "guy" has 
completely lost all traces of masculinity. I was arguing with one woman about this, and 
she kept on saying, "It may have retained some male flavor for you, but it has none in 
most people's usage." I wasn't convinced, but nothing I could think of to say would budge 
her from her position. However, fortune proved to be on my side, because, in a last-ditch 
attempt to convince me, she said, "Why, I've even heard guys use it to refer to a bunch of 
women!" Only after saying it did she realize that she had just unwittingly undermined her 
own claim. 

Such are the subtleties of language. We are often simply too unaware of how our 
own minds work, and what we really believe. It is there for us to perceive, but too often 
people do not listen to themselves. They think they know themselves without listening to 
themselves. Along these lines, I recently heard myself saying "chesspeople" to refer to 
those wooden objects that you move about on a chessboard. It seems that my second- 
order reflex to change the suffix "man" into "person" and "men" into "people" was a little 
too strong, or at least too mechanical. After all, we do have the term "chess pieces"! 

There simply is a problem with default assumptions in our society. It is 
manifested everywhere. You find it in proverbs like "To each his own", "Time and tide 
wait for no man", and so on. You hear it when little children (and adults) 
talk about squirrels and birds in their yards ("Oh, look at him running with that acorn in 
his mouth!"). You see it in animated cartoons, many of which feature some poor 
schlemiel-a sad "fall guy", a kind of schmoe with whom "everyman" can identify-whose 
fate it is to be dumped on by the world, and we all laugh with him as he is dealt one cruel 
setback after another. But why aren't there women in this role more often? Why aren't 
there more "schlemielesses"-more "fall gals"? 

One evening at some friends', I was reading a delightful children's book called 
Frog and Toad Are Friends, and I asked why Frog and Toad both had to be males. This 
brought up the general topic of female representation in children's television and movies. 
In particular, we discussed the Muppets, and we all wondered why there are so few 
sympathetic female Muppet characters. I'm a great fan of Ms. Piggy's, but still I feel that 
if she's the only major female character, something is wrong. She's hardly an ideal role model.

This general kind of problem, of course, is not limited to questions of sex. It 
extends far further, to groups of any sort, large or small. The cartoons in The New 
Yorker, for instance, although innocuous in one sense, certainly do not do anything to 
promote a change in one's default assumptions about the roles people can play. How
often do you see a black or female executive in a New Yorker cartoon (unless, of course, 
they are there expressly because the point of the joke depends on it)? The same could be 
said for most television shows, most books, most movies ... It is hard to know how to 
combat such a huge monolithic pattern. 

There is an excellent and entertaining book that I discovered only after this 
column was nearly complete, and which could be a giant leap for humankind in the right 
direction. It is The Handbook of Nonsexist Writing, by Casey Miller and Kate Swift. I 
recommend it heartily. 

* * * 

One of the most eloquent antisexist statements I have ever come across is a talk 
delivered recently by Stanford University President Donald Kennedy at an athletes' 
banquet. Thirty years ago, Kennedy himself was an athlete at Harvard, and he reminisced 
about a similar banquet he had attended back then. He mused: 

It occurs to me to wonder: What would the reaction have been if I had predicted 
that soon .... women would run the Boston Marathon faster than it had ever been 
run by men up to that point? There would have been incredulous laughter from 
two-thirds of the room, accompanied by a little locker-room humor. 

Yet that is just what has taken place. My classmates would be astonished at 
the happening, but they would be even more astonished at the trends. If we look at 
the past ten years of world's best times in the marathon for men and women, it is 
clear that the women's mark has been dropping, over the decade, at a rate about 
seven times faster than the men's record. 

The case of swimming is even more astonishing. Kennedy recalls that in his day, 
the Harvard and Yale teams were at the very pinnacle of the nation in swimming, and 
both came undefeated into their traditional rival meet at the end of that season. 

What would have happened if you had put this year's Stanford women into that
pool? Humiliation is what. Just to give you a sample, seven current Stanford women
would have beaten my friend Dave Hedberg, Harvard's great sprint freestyler, and
all the Yalies in the 100. The Stanford women would have swept the 200-yard
backstroke and breaststroke, and won all the other events contested.

In the 400-yard freestyle relay, there would have been a 10-second wait
between Stanford's touch and the first man to arrive at the finish. Do you know how
long ten seconds is? Can you imagine that crowd in Payne Whitney Gymnasium,
seeing a team of girls line up against the two best freestyle relay groups in the East,
expecting the unexpected, and then having to wait this long for the men to get

Kennedy paints a hilarious picture, but of course his point is dead serious: 

I ask you: If conventional wisdom about women's capacity can be so thoroughly 
decimated in this most traditional area of male superiority, how can we possibly 
cling to the illusions we have about them in other areas? 

What, in short, is the lesson to be drawn from the emerging athletic equality 
of women? I think it is that those who make all the other, less objectively verifiable 
assumptions about female limitations would do well to discard them. They belong 
in the same dusty closet with the notion that modern ballplayers couldn't carry Ty 
Cobb's spikes and the myth that blacks can't play quarterback. Whether it is vicious 
or incapacitating or merely quaint, nonsense is nonsense. And it dies hard. 

'Tis a point to ponder. In the meantime: 


Post Scriptum. 

Since writing this column, I have continued to ponder these issues with great 
intensity. And I must say, the more I ponder, the more prickly and confusing the whole 
matter becomes. I have found appalling unawareness of the problem all around
me-in friends, colleagues, students, on radio and television, in
magazines, books, films, and so on. The New York Times is one of the worst offenders.
You can pick it up any day and see prominent women referred to as "chairman" or 
"congressman". Even more flagrantly obnoxious is when they refer to prominent 
feminists by titles that feminism repudiates. For example, a long article on Judy 
Goldsmith (head of NOW, the National Organization for Women) repeatedly referred to 
her as "Mrs. Goldsmith". The editors' excuse is: 

Publications vary in tone, and the titles they affix to names will differ accordingly. 
The Times clings to traditional ones (Mrs., Miss, and Dr., for example). As for Ms.
-that useful business-letter coinage-we reconsider it from time to time; to our ear, it 
still sounds too contrived for news writing. 

As long as they stick with the old terms, they will sound increasingly reactionary 
and increasingly silly. 

Perhaps what bothers me the most is when I hear newscasters on the radio - 
especially public radio-using blatantly sexist terms when it would be so easy to avoid 
them. Female announcers are almost uniformly as sexist as male announcers. A typical 
example is the female newscaster on National Public Radio who spoke of "the employer 
who pays his employees on a weekly basis" and "the employee who is concerned about 
his tax return", when both employer and employee were completely hypothetical 
personages, thus without either gender. Or the male newscaster who described the Pope 
in Warsaw as "surrounded by throngs of his countrymen". Or the female newscaster who 
said, "Imagine I'm a worker and I'm on my deathbed and I have no money to support my 
wife and kids ..." Of all people, newscasters should know better. 

I attended a lecture in which a famous psychologist uttered the following 
sentence, verbatim: "What the plain man would like, as he comes into an undergraduate 
psychology course, as a man or a woman, is that he would find out something about 
emotions." Time and again, I have observed people lecturing in public who, like this 
psychologist, seem to feel a mild discomfort with generic "he" and generic "man", and 
who therefore try to compensate, every once in a while, for their constant usage of such 
terms. After, say, five uses of "he" in describing a hypothetical scientist, they will throw 
in a meek "he or she" (and perhaps give an embarrassed little chuckle); then, having 
pacified their guilty conscience, they will go back to "he" and other sexist usages for a 
while, until the guilt juices have built up enough again to trigger one more token 
nonsexist usage. 

This is not progress, in my opinion. In fact, in some ways, it is retrograde motion, 
and damages the cause of nonsexist language. The problem is that these people are 
simultaneously showing that they recognize that "he" is not truly generic and yet 
continuing to use it as if it were. They are thereby, at one and the same time, increasing 
other people's recognition of the sham of considering "he" as a generic, and yet 
reinforcing the old convention of using it anyway. It's a bad bind. 

In case anybody needs to be convinced that supposed generics such as "he" and 
"man" are not neutral in people's minds, they should reflect on the following findings. I 
quote from the chapter called "Who Is Man?" in Words and Women, an earlier book by 
Casey Miller and Kate Swift: 

In 1972 two sociologists at Drake University, Joseph Schneider and Sally 
Hacker, decided to test the hypothesis that man is generally understood to embrace 
woman. Some three hundred college students were asked to select from magazines 
and newspapers a variety of pictures that would appropriately illustrate the different 
chapters of a sociology textbook being prepared for publication. Half the students 
were assigned chapter headings like "Social Man", "Industrial Man", and "Political 
Man". The other half were given different but corresponding headings like 
"Society", "Industrial Life", and "Political Behavior". Analysis of the pictures 
selected revealed that in the minds of students of both sexes use of the word man 
evoked, to a statistically significant degree, images of males only-filtering out 
recognition of women's participation in these major areas of life-whereas the 
corresponding headings without man evoked images of both males and females. In 
some instances the differences reached magnitudes of 30 to 40 per cent. The 
authors concluded, "This is rather convincing evidence that when you use the word 
man generically, people do tend to think male, and tend not to think female." 

Subsequent experiments along the same lines but involving schoolchildren rather than 
college students are then described by Miller and Swift. The results are much the same. 
No matter how generic "man" is claimed to be, there is a residual trace, a subliminal 
connotation of higher probability of being male than female. 

* * * 

Shortly after this column came out, I hit upon a way of describing one of the 
problems of sexist language. I call it the slippery slope of sexism. The idea is very 
simple. When a generic term and a "marked" term (i.e., a sex-specific term) coincide, 
there is a possibility of mental blurring on the part of listeners and even on the
part of the speaker. Some of the connotations of the
generic will automatically rub off even when the specific is meant, and conversely. The 
example of "Industrial Man" illustrates one half of this statement, where a trace of male 
imagery rubs off even when no gender is intended. The reverse is an equally
common phenomenon; an example would be
when a newscaster speaks of "the four-man crew of next month's space shuttle flight". It
may be that all four are actually males, in which case the usage would be precise. Or it 
may be that there is a woman among them, in which case "man" would be
functioning generically (supposedly). But if you're
just listening to the news, and you don't know whether a woman is among the four, what 
are you supposed to do? 

Some listeners will automatically envision four males, but others, remembering 
the existence of female astronauts, will leave room in their minds for at least one woman 
potentially in the crew. Now, the newscaster may know full well that this flight consists
of males only. In fact, she may have chosen the phrase "four-man crew" quite
deliberately, in order to let you know that no woman is included. For her, "man" may be
marked. On the other hand, she may not have given it a second thought; for her, "man"
may be unmarked. But how are you to know? The problem is right there: the slippery
slope. Connotations slip back and forth very shiftily, and totally beneath our usual level
of awareness-especially (though not exclusively) at the interface between two people
whose usages differ.

FIGURE 7-2. The "slippery slope of sexism", illustrated. In each case in (a), a
supposed generic (i.e., gender-neutral term) is shown above its two marked
particularizations (i.e., gender-specific terms). However, the masculine and generic
coincide, which fact is symbolized by the thick heavy line joining them-the slippery slope,
along which connotations slosh back and forth, unimpeded. The "most favored sex"
status is thereby accorded the masculine term. In (b), the slippery slopes are replaced by
true gender fairness, in which generics are unambiguously generic and marked terms
unambiguously marked. Still, it is surprising how often it is totally irrelevant which sex
is involved. Do we need-or want-to be able to say such things as, "Her actions were
heroinic"? Who cares if a hero is male or female, as long as what they did is heroic? The
same can be said about actors, sculptors, and a hostess of other terms. The best fix for
that kind of slippery slope is simply to drop the marked term, making all three coincide
in a felicitously ambisexual menage a trois.

Let me be a little more precise about the slippery slope. I have chosen a number 
of salient examples and put them in Figure 7-2. Each slippery slope involves a little 
triangle, at the apex of which is a supposed generic, and the bottom two corners of which 
consist of oppositely marked terms. Along one side of each triangle runs a diagonal
line-the dreaded slippery slope itself. Along that line,
connotations slosh back and forth freely in the minds of listeners and speakers and 
readers and writers. And it all happens at a completely unconscious level, in exactly the 
same way as a poet's choice of a word subliminally evokes dozens of subtle flavors 
without anyone's quite understanding how it happens. This wonderful fluid magic of 
poetry is not quite so wonderful when it imbues one word with all sorts of properties that 
it should not have. 

The essence of the typical slippery slope is this: it establishes a firm "handshake" 
between the generic and the masculine, in such a way that the feminine term is left out in 
the cold. The masculine inherits the abstract power of the generic, and the generic inherits 
the power that comes with specific imagery. Here is an example of the
generic-benefits-from-specific effect: "Man forging his destiny". Who can resist thinking of some kind of
huge mythical brute of a guy hacking his way forward in a jungle or otherwise making 
progress? Does the image of a woman even come close to getting evoked? I seriously 
doubt it. And now for the converse, consider these gems: "Kennedy was a man for all 
seasons." "Feynman is the world's smartest man." "Only a man with powerful esthetic 
intuition could have created the general theory of relativity." "Few men have done more 
for science than Stephen Hawking." "Leopold and Loeb wanted to test the idea that a 
perfect crime might be committed by men of sufficient intelligence." Why "man" and 
"men", here? The answer is: to take advantage of the specific-benefits-from-generic
effect. The power of the word "man" emanates largely from its close connection with the 
mythical "ideal man": Man the Thinker, Man the Mover, Man whose Best Friend is Dog. 

* * * 

Another way of looking at the slippery-slope effect is to focus on the single 
isolated corner of the triangle. At first it might seem as if it makes women somehow more 
distinguished. How nice! But in fact what it does is mark them as odd. They are 
considered nonstandard; the standard case is presumed not to be a woman. In other 
words, women have to fight their way back into imagery as just-plain people. Here are 
some examples to make the point. 

When I learned French in school, the idea that masculine pronouns covered 
groups of mixed sex seemed perfectly natural, logical, and unremarkable to me. Much 
later, that usage came to seem very biased and bizarre to me. However, very recently, I 
was a bit surprised to catch myself falling into the same trap in different guise. I was 
perusing a multilingual dictionary, and noticed that instead of the usual m. and f. to 
indicate noun genders, they had opted for '+' and '-'. Which way, do you suspect? Right!
And it seemed just right to me, too-until I realized how dumb I was being. 

Heard on the radio news: "A woman motorist is being held after officials
observed her to be driving erratically near the White House." Why say "woman
motorist"? Would you say "man motorist" if it had been a male? Why is gender, and 
gender alone, such a crucial variable? 

Think of the street sign that shows a man in silhouette walking across the street, 
intended to tell you "Pedestrian Crossing" in sign language. What if it were recognizably 
a woman walking across the street? Since it violates the standard default assumption that 
people have for people, it would immediately arouse a kind of suspicion: "Hmm . . . 
'Women Crossing'? Is there a nunnery around here?" This would be the reaction not 
merely of dyed-in-the-wool sexists, but of anyone who grew up in our society, where 
women are portrayed-not deliberately or consciously, but ubiquitously and subliminally- 
as "exceptions". 

If I write, "In the nineteenth century, the kings of nonsense were Edward Lear and 
Lewis Carroll", people will with no trouble get the message that those two men were the 
best of all nonsense writers at that time. But now consider what happens if I write, "The 
queen of twentieth-century nonsense is Gertrude Stein". The implication is unequivocal: 
Gertrude Stein is, among female writers of nonsense, the best. It leaves completely open 
her ranking relative to males. She might be way down the list! Now isn't this 
preposterous? Why is our language so asymmetric? This is hardly chivalry -it is utter 

A remarkable and insidious slippery-slope phenomenon is what has happened 
recently to formerly all-women's colleges that were paired with formerly all-men's 
colleges, such as Pembroke and Brown, Radcliffe and Harvard, and so on. As the two 
merged, the women's school gradually faded out of the picture. Do men now go to 
Radcliffe or Pembroke or Douglass? Good God, no! But women are proud to go to 
Harvard and Brown and Rutgers. Sometimes, the women's college keeps some status 
within the larger unit, but that larger unit is always named after the men's college. In a 
weird twist on this theme, Stanford University has no sororities at all-but guess what 
kinds of people it now allows in its fraternities! 

Another pernicious slippery slope has arisen quite recently. That is the one
involving "gay" as both masculine and generic, and "Lesbian" as feminine. What is 
problematic here is that some people are very conscious of the problem, and refuse to use 
"gay" as a generic, replacing it with "gay or Lesbian" or "homosexual". (Thus there are 
many "Gay and Lesbian Associations".) Other people, however, have eagerly latched 
onto "gay" as a generic and use it freely that way, referring to "gay people", "gay men", 
"gay women", "gay rights", and so on. As a consequence, the word "gay" has a much 
broader flavor to it than does "Lesbian". What does "the San Francisco gay community" 
conjure up? Now replace "gay" by "Lesbian" and try it again. The former image probably 
is capable of flitting between that of both sexes and that of men only, while the latter is 
certainly restricted to women. The point is simply that men are made to seem standard, 
ordinary, somehow proper; women as special, deviant, exceptional. That is the essence of 
the slippery slope. 

* * * 

Part of the problem in sexism is how deeply ingrained it is. I have noticed a 
disturbing fact about my observation of language and related phenomena: whenever I 
encounter a particularly blatant example, I write it down joyfully, and say to friends, "I
just heard a great example of sexism!" Now, why is it good to find a glaring example of 
something bad? Actually, the answer is very simple. You need outrageously clear 
examples if you want to convince many people that there is a problem worth taking seriously at all.

I was very fortunate to meet the philosopher and feminist Joan Straumanis shortly 
after my column on sexism appeared. We had a lot to talk over, and particularly enjoyed 
swapping stories of the sort that make you groan and say, "Isn't that great?"-meaning, of 
course, "How sickening!" Here's one that happened to her. Her husband was in her 
university office one day, and wanted to make a long-distance phone call. He dialed '0', 
and a female operator answered. She asked if he was a faculty member. He said no, and 
she said, "Only faculty members can make calls on these phones." He replied, "My wife 
is a faculty member. She's in the next room-I'll get her." The operator snapped back, "Oh,
no-wives can't use these phones!" 

Another true story that I got from Joan Straumanis, perhaps more provocative and 
fascinating, is this one. A group of parents arranged a tour of a hospital for a group of 
twenty children: ten boys and ten girls. At the end of the tour, hospital officials presented 
each child with a cap: doctors' caps for the boys, nurses' caps for the girls. The parents, 
outraged at this sexism, went to see the hospital administration. They were promised that 
in the future, this would be corrected. The next year, a similar tour was arranged, and at
the end, the parents came by to pick up their children. What did they find, but the exact 
same thing-all the boys had on doctors' hats, all the girls had on nurses' hats! Steaming, 
they stormed up to the director's office and demanded an explanation. The director gently 
told them, "But it was totally different this year: we offered them all whichever hat they 
wanted."

David Moser, ever an alert observer of the language around him, had tuned into a 
radio talk show one night, and heard an elderly woman voicing outrage at the mild 
sentence of two men who had murdered a three-year-old girl. The woman said, "Those 
two men should get the gas chamber for sure. I think it's terrible what they did! Who 
knows what that little girl could have grown up to become? Why, she could have been 
the mother of the next great composer!" The idea that that little girl might have grown up
to be the next great composer undoubtedly never entered the woman's mind. Still, her 
remark was not consciously sexist and I find it strangely touching, reminiscent of a 
quieter era where gender roles were obvious and largely unquestioned, an era when many 
people felt safe and secure in their socially defined niches. But those times are gone, and 
we must now move ahead with consciousness raised high. 

In one conversation I was in, a man connected with a publisher-let's call it 
"Freeperson"-said to me, "Aldrich was the liaison between the Freeperson boys and we- 
er, I mean us." What amused me so much was his instant detection and correction of a
syntactic error, yet no awareness of his more serious semantic error. Isn't that great? 

* * * 

I would not be totally honest if I did not admit that occasionally, despite my
apparent confidence in what I have been saying, I experience serious doubts about how 
deeply negative the impact of sexist language upon minds is. I must emphasize that I 
reject the Sapir-Whorf hypothesis about language molding perception and culture. I think 
the flow of causality is almost entirely in the other direction. And I am truly impressed 
with the plasticity of the human mind, with its ability to replace default assumptions at 
the drop of a hat with alternatives-even wildly unusual ones. People may assume that an 
unspecified orchestra conductor is male-but if they learn it is a woman, they immediately 
absorb that piece of knowledge without flinching. A barber I recently went to said to me, 
"They treated me like a king." This perhaps wouldn't surprise you-unless you knew that 
she was a woman. So why didn't she say "like a queen"? And David Moser reports that a 
woman he knows told him, "That family treated me just like a son!" Now why didn't she 
say "like a daughter"? I suppose it is because "treat someone like a king" and "treat 
someone like a son" are to some extent stock phrases in English, and despite their 
apparent sexism, perhaps they are actually quite neutral in their deep imagery. I am not 
saying I know; but I am saying I wonder, sometimes. 

I also have to give pause to the following fact: Marina Yaguello, a professor of 
linguistics at the University of Paris and the author of the strongly feminist book Les 
mots et les femmes ("Words and Women"), an extended study of sexism in the French 
language, more recently wrote another book about general linguistics for the lay public, 
called Alice au pays du langage ("Alice in Language-Land"). In this book, Yaguello 
makes no effort to avoid all the sexist traps of the French language that she took so many 
pains to spell out in her previous book. To say "all people", she writes tous les hommes
("all men"); to refer to a generic young child, she says le jeune enfant (using the 
masculine article). Perhaps what flabbergasted me most was that when she wanted to 
refer to a female child, instead of writing une enfant (with "child" feminine, which is 
perfectly possible), she wrote un enfant du sexe feminin-"a child of the feminine sex", 
where "child" itself is masculine! If even a staunch feminist can reconcile herself to such 
blatantly sexist usages, feeling that there are deeper truths than what appears on the 
surface, I guess I have to sit back and think. 

This does not prevent me from feeling that we live in a sexist society whose most 
accurate reflection is provided for us in our language, and from collecting specimens to 
document that sexism as clearly as possible. It seems to me that the state of our
language provides a kind of barometer of the state of our
society. Trying to change society through changing language may be a case of trying to 
get the tail to wag the dog, but one way of getting people to wake up to the problem is to 
point to language, a clearly observable phenomenon. 

The nonsexist goal that I would advocate is not that every profession should 
consist of half males and half females. To tell the truth, I suspect that even if we reached 
such a balanced state some day, it would not be an equilibrium state-the percentages 
would slide. It is just very unlikely, it seems to me, that males and females are that 
symmetric. But that is not at all the point of a push towards sex-neutral language. The 
purpose of eliminating biases and preconceptions is to open the door wide for people of 
either sex in any line of work or play. Symmetric opportunity, not necessarily symmetric 
distribution, is the goal that we should seek. 

* * * 

I was provoked to write the following piece about a year after the column on 
sexism came out. It came about this way. One evening I had a very lively conversation at 
dinner with a group of people who thought of the problem of sexist language as no more 
than that: dinner-table conversation. Despite all the arguments I put forth, I just couldn't 
convince them there was anything worth taking seriously there. The next morning I woke 
up and heard two most interesting pieces of news on the radio: a black Miss America had 
been picked, and a black man was going to run for president. Both of these violated 
default assumptions, and it set my mind going along two parallel tracks at once: What if 
people's default assumptions were violated in all sorts of ways both sexually and racially? 
And then I started letting the default violations cross all sorts of lines, and pretty soon I 
was coming up with an image of a totally different society, one in which ... Well, I'll just 
let you read it. 


A Person Paper 
on Purity in Language 

by William Satire (alias Douglas R. Hofstadter) 
September, 1983 

It's high time someone blew the whistle on all the silly prattle about revamping our
language to suit the purposes of certain political fanatics. You know what I'm talking 
about-those who accuse speakers of English of what they call "racism". This awkward 
neologism, constructed by analogy with the well-established term "sexism", does not sit 
well in the ears, if I may mix my metaphors. But let us grant that in our society there may 
be injustices here and there in the treatment of either race from time to time, and let us 
even grant these people their terms "racism" and "racist". How valid, however, are the 
claims of the self-proclaimed "black libbers", or "negrists"-those who would radically
change our language in order to "liberate" us poor dupes from its supposed racist bias? 

Most of the clamor, as you certainly know by now, revolves around the age-old 
usage of the noun "white" and words built from it, such as chairwhite, mailwhite, 
repairwhite, clergywhite, middlewhite, French white, forewhite, whitepower,
whiteslaughter, oneupswhiteship, straw white, whitehandle, and so on. The negrists claim 
that using the word "white", either on its own or as a component, to talk about all the 
members of the human species is somehow degrading to blacks and reinforces racism. 
Therefore the libbers propose that we substitute "person" everywhere where "white" now 
occurs. Sensitive speakers of our secretary tongue of course find this preposterous. There 
is great beauty to a phrase such as "All whites are created equal." Our forebosses who 
framed the Declaration of Independence well understood the poetry of our language. 
Think how ugly it would be to say "All persons are created equal.", or "All whites and 
blacks are created equal." Besides, as any schoolwhitey can tell you, such phrases are 
redundant. In most contexts, it is self-evident when "white" is being used in an inclusive 
sense, in which case it subsumes members of the darker race just as much as fairskins. 

There is nothing denigrating to black people in being subsumed under the rubric
"white"-no more than under the rubric "person". After all, white is a mixture of
all the colors of the rainbow, including black. Used inclusively, the word "white" has no 
connotations whatsoever of race. Yet many people are hung up on this point. A prime 
example is Abraham Moses, one of the more vocal spokeswhites for making such a shift. 
For years, Niss Moses, authoroon of the well-known negrist tracts A Handbook of 
Nonracist Writing and Words and Blacks, has had nothing better to do than go around the
country making speeches advocating the downfall of "racist language" that ble objects to. 
But when you analyze bler objections, you find they all fall apart at the seams. Niss
Moses says that words like "chairwhite" suggest to people-most especially 
impressionable young whiteys and blackeys-that all chairwhites belong to the white race.
How absurd! It is quite obvious, for instance, that the chairwhite of the League of Black 
Voters is going to be a black, not a white. Nobody need think twice about it. As a matter 
of fact, the suffix "white" is usually not pronounced with a long "i" as in the noun "white",
but like "wit", as in the terms saleswhite, freshwhite, penwhiteship, first basewhite, and 
so on. It's just a simple and useful component in building race-neutral words. 

But Niss Moses would have you sit up and start hollering "Racism!" In fact, Niss 
Moses sees evidence of racism under every stone. Ble has written a famous article, in 
which ble vehemently objects to the immortal and poetic words of the first white on the 
moon, Captain Nellie Strongarm. If you will recall, whis words were: "One small step for 
a white, a giant step for whitekind." This noble sentiment is anything but racist; it is 
simply a celebration of a glorious moment in the history of White. 

Another of Niss Moses' shrill objections is to the age-old differentiation of whites 
from blacks by the third-person pronouns "whe" and "ble". Ble promotes an absurd 
notion: that what we really need in English is a single pronoun covering both races. 
Numerous suggestions have been made, such as "pe", "tey", and others. These are all 
repugnant to the nature of the English language, as the average white in the street will 
testify, even if whe has no linguistic training whatsoever. Then there are advocates of 
usages such as "whe or ble", "whis or bier", and so forth. This makes for monstrosities 
such as the sentence "When the next President takes office, whe or ble will have to 
choose whis or bier cabinet with great care, for whe or ble would not want to offend any 
minorities." Contrast this with the spare elegance of the normal way of putting it, and 
there is no question which way we ought to speak. There are, of course, some yapping 
black libbers who advocate writing "bl/whe" everywhere, which, aside from looking 
terrible, has no reasonable pronunciation. Shall we say "blooey" all the time when we 
simply mean "whe"? Who wants to sound like a white with a chronic sneeze? 

* * * 

One of the more hilarious suggestions made by the squawkers for this point of 
view is to abandon the natural distinction along racial lines, and to replace it with a 
highly unnatural one along sexual lines. One such suggestion-emanating, no doubt, from 
the mind of a madwhite-would have us say "he" for male whites (and blacks) and "she" 
for female whites (and blacks). Can you imagine the outrage with which sensible folk of 
either sex would greet this "modest proposal"? 

Another suggestion is that the plural pronoun "they" be used in place of the 
inclusive "whe". This would turn the charming proverb "Whe who laughs last, laughs 
best" into the bizarre concoction "They who laughs last, laughs best". As if anyone in 
whis right mind could have thought that the original proverb applied only to the white 
race! No, we don't need a new pronoun to "liberate" our minds. That's the lazy white's 
way of solving the pseudo-problem of racism. In any case, it's ungrammatical. The 
pronoun "they" is a plural pronoun, and it grates on the civilized ear to hear it used to 
denote only one person. Such a usage, if adopted, would merely promote illiteracy and 
accelerate the already scandalously rapid nosedive of the average intelligence level in our society.

Niss Moses would have us totally revamp the English language to suit bier 
purposes. If, for instance, we are to substitute "person" for "white", where are we to stop? 
If we were to follow Niss Moses' ideas to their logical conclusion, we would have to 
conclude that ble would like to see small blackeys and whiteys playing the game of 
"Hangperson" and reading the story of "Snow Person and the Seven Dwarfs". And would 
ble have us rewrite history to say, "Don't shoot until you see the persons of their eyes!"? 
Will pundits and politicians henceforth issue person papers? Will we now have egg yolks 
and egg persons? And pledge allegiance to the good old Red, Person, and Blue? Will we 
sing, "I'm dreaming of a person Christmas"? Say of a frightened white, "Whe's person as 
a sheet!"? Lament the increase of person-collar crime? Thrill to the chirping of 
bobpersons in our gardens? Ask a friend to person the table while we go visit the persons' 
room? Come off it, Niss Moses-don't personwash our language!

What conceivable harm is there in such beloved phrases as "No white is an 
island", "Dog is white's best friend", or "White's inhumanity to white"? Who would 
revise such classic book titles as Bronob Jacowski's The Ascent of White or Eric Steeple 
Bell's Whites of Mathematics? Did the poet who wrote "The best-laid plans of mice and 
whites gang aft agley" believe that blacks' plans gang ne'er agley? Surely not! Such 
phrases are simply metaphors; everyone can see beyond that. Whe who interprets them as 
reinforcing racism must have a perverse desire to feel oppressed. "Personhandling" the 
language is a habit that not only Niss Moses but quite a few others have taken up 
recently. For instance, Nrs. Delilah Buford has urged that we drop the useful distinction 
between "Niss" and "Nrs." (which, as everybody knows, is pronounced "Nissiz", the 
reason for which nobody knows!). Bier argument is that there is no need for the public to 
know whether a black is
employed or not. Need is, of course, not the point. Ble conveniently sidesteps the fact that 
there is a tradition in our society of calling unemployed blacks "Niss" and employed 
blacks "Nrs." Most blacks-in fact, the vast majority-prefer it that way. They want the 
world to know what their employment status is, and for good reason. Unemployed blacks 
want prospective employers to know they are available, without having to ask 
embarrassing questions. Likewise, employed blacks are proud of having found a job, and 
wish to let the world know they are employed. This distinction provides a sense of 
security to all involved, in that everyone knows where ble fits into the scheme of things. 

But Nrs. Buford refuses to recognize this simple truth. Instead, ble shiftily turns 
the argument into one about whites, asking why it is that whites are universally addressed 
as "Master", without any differentiation between employed and unemployed ones. The 
answer, of course, is that in Anerica and other Northern societies, we set little store by the 
employment status of whites. Nrs. Buford can do little to change that reality, for it seems 
to be tied to innate biological differences between whites and blacks. Many white-years 
of research, in fact, have gone into trying to understand why it is that employment status 
matters so much to blacks, yet relatively little to whites. It is true that both races have a 
longer life expectancy if employed, but of course people often do not act so as to 
maximize their life expectancy. So far, it remains a mystery. In any case, whites and 
blacks clearly have different constitutional inclinations, and different goals in life. And so 
I say, 

Vive na différence!

* * * 

As for Nrs. Buford's suggestion that both "Niss" and "Nrs." be unified into the 
single form of address "Ns." (supposed to rhyme with "fizz"), all I have to say is, it is 
arbitrary and clearly a thousand years ahead of its time. Mind you, this "Ns." is an 
abbreviation concocted out of thin air: it stands for absolutely nothing. Who ever heard of 
such toying with language? And while we're on this subject, have you yet run across the 
recently founded Ns. magazine, dedicated to the concerns of the "liberated black"? It's 
sure to attract the attention of a trendy band of black airheads for a little while, but 
serious blacks surely will see through its thin veneer of slick, glossy Madison Avenue 
approaches to life. 

Nrs. Buford also finds it insultingly asymmetric that when a black is employed by 
a white, ble changes bier firmly name to whis firmly name. But what's so bad about that? 
Every firm's core consists of a boss (whis job is to make sure long-term policies are well 
charted out) and a secretary (bier job is to keep corporate affairs running smoothly on a 
day-to-day basis). They are both equally important and vital to the firm's success. No one 
disputes this. Beyond them there may of course be other firmly members. Now it's quite 
obvious that all members of a given firm should bear the same
name-otherwise, what are you going to call the firm's products? And since it would be 
nonsense for the boss to change whis name, it falls to the secretary to change bier name. 
Logic, not racism, dictates this simple convention. 

What puzzles me the most is when people cut off their noses to spite their faces. 
Such is the case with the time-honored colored suffixes "oon" and "roon", found in 
familiar words such as ambassadroon, stewardoon, and sculptroon. Most blacks find it 
natural and sensible to add those suffixes onto nouns such as "aviator" or "waiter". A 
black who flies an airplane may proudly proclaim, "I'm an aviatroon!" But it would sound 
silly, if not ridiculous, for a black to say of blerself, "I work as a waiter." On the other 
hand, who could object to my saying that the debonair Pidney Soitier is a great actroon, 
or that the hilarious Quill Bosby is a great comedioon? You guessed it-authoroons such
as Niss Mildred Hempsley and Nrs. Charles White, both of whom angrily reject the 
appellation "authoroon", deep though its roots are in our language. Nrs. White, perhaps 
one of the finest poetoons of our day, for some reason insists on being known as a "poet". 
It leads one to wonder, is Nrs. White ashamed of being black, perhaps? I should hope not. 
White needs black, and black needs white, and neither race should feel ashamed. 

Some extreme negrists object to being treated with politeness and courtesy by 
whites. For example, they reject the traditional notion of "Negroes first", preferring to 
open doors for themselves, claiming that having doors opened for them suggests 
implicitly that society considers them inferior. Well, would they have it the other way? 
Would these incorrigible grousers prefer to open doors for whites? What do blacks want? 

* * * 

Another unlikely word has recently become a subject of controversy: "blackey". 
This is, of course, the ordinary term for black children (including teen-agers), and by
affectionate extension it is often applied to older blacks. Yet, incredible though it seems, 
many blacks-even teen-age blackeys-now claim to have had their "consciousness raised", 
and are voguishly skittish about being called "blackeys". Yet it's as old as the hills for 
blacks employed in the same office to refer to themselves as "the office blackeys". And 
for their boss to call them "my blackeys" helps make the ambiance more relaxed and
comfy for all. It's hardly the mortal insult that libbers claim it to be. Fortunately, most 
blacks are sensible people and realize that mere words do not demean; they know it's how 
they are used that counts. Most of the time, calling a black-especially an older black-a 
"blackey" is a thoughtful way of complimenting bier, making bier feel young, fresh, and 
hireable again. Lord knows, I certainly wouldn't object if someone told me that I looked 
whiteyish these days!

Many young blackeys go through a stage of wishing they had been born
white. Perhaps this is due to popular television shows like Superwhite and Batwhite, but 
it doesn't really matter. It is perfectly normal and healthy. Many of our most successful 
blacks were once tomwhiteys and feel no shame about it. Why should they? Frankly, I 
think tomwhiteys are often the cutest little blackeys-but that's just my opinion. In any 
case, Niss Moses (once again) raises a ruckus on this score, asking why we don't have a 
corresponding word for young whiteys who play blackeys' games and generally manifest 
a desire to be black. Well, Niss Moses, if this were a common phenomenon, we most 
assuredly would have such a word, but it just happens not to be. Who can say why? But 
given that tomwhiteys are a dime a dozen, it's nice to have a word for them. The lesson is 
that White must learn to fit language to reality; White cannot manipulate the world by 
manipulating mere words. An elementary lesson, to be sure, but for some reason Niss 
Moses and others of bier ilk resist learning it. 

Shifting from the ridiculous to the sublime, let us consider the Holy Bible. The 
Good Book is of course the source of some of the most beautiful language and profound 
imagery to be found anywhere. And who is the central character of the Bible? I am sure I 
need hardly remind you; it is God. As everyone knows, Whe is male and white, and that 
is an indisputable fact. But have you heard the latest joke promulgated by tasteless 
negrists? It is said that one of them died and went to Heaven and then returned. What did 
ble report? "I have seen God, and guess what? Ble's female!" Can anyone say that this is 
not blasphemy of the highest order? It just goes to show that some people will stoop to 
any depths in order to shock. I have shared this "joke" with a number of friends of mine 
(including several blacks, by the way), and, to a white, they have agreed that it sickens 
them to the core to see Our Lord so shabbily mocked. Some things are just in bad taste, 
and there are no two ways about it. It is scum like this who are responsible for some of 
the great problems in our society today, I am sorry to say. 

* * * 

Well, all of this is just another skirmish in the age-old Battle of the Races, I guess, 
and we shouldn't take it too seriously. I am reminded of words spoken by the great British 
philosopher Alfred West Malehead in whis commencement address to my alma 
secretaria, the University of North Virginia: "To enrich the language of whites is, 
certainly, to enlarge the range of their ideas." I agree with this admirable sentiment 
wholeheartedly. I would merely point out to the overzealous that there are some 
extravagant notions about language that should be recognized for what they are: cheap 
attempts to let dogmatic, narrow minds enforce their views on the speakers lucky enough 
to have inherited the richest, most beautiful and flexible language on earth, a language 
whose traditions run back through the centuries to such deathless poets as Milton, 
Shakespeare, Wordsworth, Keats, Walt Whitwhite, and so many others ... Our language 
owes an
incalculable debt to these whites for their clarity of vision and expression, and if the 
shallow minds of bandwagon jumping negrists succeed in destroying this precious 
heritage for all whites of good will, that will be, without any doubt, a truly female day in 
the history of Northern White. 

Post Scriptum. 

Perhaps this piece shocks you. It is meant to. The entire point of it is to use 
something that we find shocking as leverage to illustrate the fact that something that we 
usually close our eyes to is also very shocking. The most effective way I know to do so
is to develop an extended analogy with something known as shocking and reprehensible. 
Racism is that thing, in this case. I am happy with this piece, despite-but also because of- 
its shock value. I think it makes its point better than any factual article could. As a friend 
of mine said, "It makes you so uncomfortable that you can't ignore it." I admit that 
rereading it makes even me, the author, uncomfortable! 

Numerous friends have warned me that in publishing this piece I am taking a 
serious risk of earning myself a reputation as a terrible racist. I guess I cannot truly 
believe that anyone would see this piece that way. To misperceive it this way would be 
like calling someone a vicious racist for telling other people "The word 'nigger' is
extremely offensive." If allusions to racism, especially for the purpose of satirizing 
racism and its cousins, are confused with racism itself, then I think it is time to stop 

Some people have asked me if to write this piece, I simply took a genuine 
William Safire column (appearing weekly in the New York Times Magazine under the 
title "On Language") and "fiddled" with it. That is far from the truth. For years I have 
collected examples of sexist language, and in order to produce this piece, I dipped into 
this collection, selected some of the choicest, and ordered them very carefully. 
"Translating" them into this alternate world was sometimes extremely difficult, and some 
words took weeks. The hardest terms of all, surprisingly enough, were "Niss", "Nrs.", and 
"Ns.", even though "Master" came immediately. The piece itself is not based on any 
particular article by William Safire, but Safire has without doubt been one of the most 
vocal opponents of nonsexist language reforms, and therefore merits being safired upon. 

Interestingly, Master Safire has recently spoken out on sexism in whis column 
(August 5, 1984). Lamenting the inaccuracy of writing either "Mrs. Ferraro" or "Miss 
Ferraro" to designate the Democratic vice-presidential candidate whose husband's name 
is "Zaccaro", whe writes: 

It breaks my heart to suggest this, but the time has come for Ms. We are no 
longer faced with a theory, but a condition. It is unacceptable for journalists to 
dictate to a candidate that she call herself Miss or else use her married name; 

FIGURE 8-1. From a "Peggy Mills" comic strip, circa 1930. 

it is equally unacceptable for a candidate to demand that newspapers print a blatant 
inaccuracy by applying a married honorific to a maiden name. 

How disappointing it is when someone finally winds up doing the right thing but 
for the wrong reasons! In Safire's case, this shift was entirely for journalistic rather than 
humanistic reasons! It's as if Safire wished that women had never entered the political 
ring, so that the Grand Old Conventions of English-good enough for our grandfathers- 
would never have had to be challenged. How heartless of women! How heartbreaking the 
toll on our beautiful language! 

* * * 

A couple of weeks after I finished this piece, I ran into the book The Nonsexist 
Communicator, by Bobbye Sorrels. In it, there is a satire called "A Tale of Two Sexes", 
which is very interesting to compare with my "Person Paper". Whereas in mine, I slice 
the world orthogonally to the way it is actually sliced and then perform a mapping of 
worlds to establish a disorienting yet powerful new vision of our world, in hers, Ms. 
Sorrels simply reverses the two halves of our world as it is actually sliced. Her satire is 
therefore in some ways very much like mine, and in other ways extremely different. It 
should be read. 

I do not know too many publications that discuss sexist language in depth. The 
finest I have come across are the aforementioned Handbook of Nonsexist Writing, by 
Casey Miller and Kate Swift; Words and Women, by the same authors; Sexist Language: 
A Modern Philosophical Analysis, edited by Mary Vetterling-Braggin; The Nonsexist 
Communicator, by Bobbye Sorrels; and a very good journal titled Women and Language 
News, from which the cartoon
in Figure 8-1 was taken. Subscriptions are available at Centenary College of Louisiana, 
2911 Centenary Boulevard, Shreveport, Louisiana 71104. 

My feeling about nonsexist English is that it is like a foreign language that I am 
learning. I find that even after years of practice, I still have to translate sometimes from 
my native language, which is sexist English. I know of no human being who speaks 
Nonsexist as their native tongue. It will be very interesting to see if such people come to 
exist. If so, it will have taken a lot of work by a lot of people to reach that point. 

One final footnote: My book Gödel, Escher, Bach, whose dialogues were the
source of my very first trepidations about my own sexism, is now being translated into 
various languages, and to my delight, the Tortoise, a green-blooded male if ever there 
was one in English, is becoming Madame Tortue in French, Signorina Tartaruga in 
Italian, and so on. Full circle ahead! 

Section III: 
Sparking and Slipping 

Pattern, Poetry, and Power in the Music of Frederic Chopin 



The concern of the following five chapters is creativity: its wellsprings and its 
mechanizability. One of the most common metaphors for creativity is that of "spark": an 
electric leap of thought from one place to a remote one, without any apparent justification 
beforehand, but with all the justification in the world after the fact. Besides being used as 
a noun, "spark" is also used as a verb: one idea sparks another. Creative mental activity 
becomes, in this imagery, a set of sparks flying around in a space of concepts. Just how 
different is this metaphor for the mind from the reality of computers? They are filled with 
electricity rushing from one place to another at the most unimaginable speeds. Isn't that 
enough to turn the mechanical into the fluid? Or do computers still lack something 
ineffable? Are their mechanical attempts at thinking still too rigid, too dry? Is something 
liquid and slippery missing? My word for the elusive aspect of human thought still 
lacking in synthetic imitations is "slippability". Human thoughts have a way of slipping 
easily along certain conceptual dimensions into other thoughts, and resisting such 
slippage along other dimensions. A given idea has slightly different slippabilities- 
predispositions to slip-in each different human mind that it comes to live in. Yet some
minds' slippabilities seem to give rise to what we consider genuine creativity, while 
others' do not. What is this precious gift? Is there a formula to the creative act? Can spark 
and slippability be canned and bottled? In fact, isn't that just what a human brain is-an 
encapsulated creativity machine? Or is there more to creativity and mind than can ever be 
encapsulated in any finite physical object or mathematical model? 

Pattern, Poetry, and Power 
in the Music of Frederic Chopin 

April, 1982 

THE abstract visual pattern in Figure 9-1 is a graphical representation of the opening
of one of the most difficult and lyrical pieces for piano ever composed, namely the
eleventh etude in Frederic Chopin's Opus 25, written in about 1832, when he was in his
early twenties. As a boy, I heard the Chopin etudes many times over on my parents' 
phonograph, and I quickly grew to love them. They became as familiar to me as the faces 
of my friends. Indeed, I cannot imagine who I would be if I did not know these pieces. 

A few years later, as a teen-ager who enjoyed playing piano, I wanted to learn to 
play some of these old friends. I went to the local music store and found a complete 
volume of them. I will never forget my reaction on opening the book and looking for my 
friends. They were nowhere to be found! I saw nothing but masses of black notes and 
chords: complex, awesome visual patterns that I had never imagined. It was as if, 
expecting to meet old friends, I had instead found their skeletons grinning at me. It was 
terrifying. I closed the book and left, somewhat in shock. 

I remember going back several times to that music store, each time pulled by the 
same curiosity tinged with fear. One day I worked up my courage and actually bought 
that book of etudes. I suppose I hoped that if I simply sat down at the piano and tried 
playing the notes I saw, I would hear my old friends, albeit a little slowly. Unfortunately, 
nothing of the kind happened. In general, I could not even play the two hands together
comfortably, let alone recreate the sounds I knew so well. This left me disheartened and a 
little frightened at the realization of the awesome complexities I had taken for granted. 
You can look at it two ways. One way is to be amazed at how human perception can 
integrate a huge set of independent elements and "hear" only a single quality; the other is
to be amazed at the incredible skill of a pianist who can play so many notes so quickly 
that they all blur into one shimmering mass, a "co-hear-ent" totality. 

At first it was bewildering to see that "friends" had anatomies of such
overwhelming complexity. But looking back, I don't know what I expected. Did I expect 
that a few simple chords could work the magic that I felt? No; if I had thought it over, I 
would have realized this was impossible. The only possible source of that magic was in 
some kind of complexity-patterned complexity, to be sure. And I think this experience 
taught me a lifelong lesson: that phenomena perceived to be magical are always the 
outcome of complex patterns of nonmagical activities taking place at a level below 
perception. More succinctly: The magic behind magic is pattern. The magic of life itself 
is a perfect example, emerging as it does out of patterned but lifeless activities at the 
molecular level. The magic of music emerges from complex, nonmagical-or should I say 
metamagical?-patterns of notes.

* * * 

Having bought this volume, I felt drawn to it, wanted to explore it somehow. I 
decided that, hard work though it might be, I would learn an etude. I chose the one that 
was my current favorite-the one pictured in Figure 9-1-and set about memorizing the
finger pattern in the right hand, together with the patterns that follow it, making up the 
first two pages or so. I played the pattern literally thousands of times, and gradually it 
became natural to my fingers, although never as natural as it had always sounded to my 
ears-or rather, to my mind. 

It was then that I first observed the amazing subtlety of the lightning flash of the 
right hand, how it is composed of two alternating and utterly different components: the 
odd-numbered notes (in red) trace out a perfect descending chromatic scale for four 
octaves, while the even-numbered notes (in black), wedged between them like pickets 
between the spaces in a picket fence, dictate an arpeggio with repeated notes. To execute 
this alternating pattern, the right hand flutters down the keyboard, tilting from side to side 
like a swift in flight, its wings beating alternately. 
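The picket-fence structure described above can be sketched in a few lines of code. This is only an illustrative toy, not Chopin's actual notes: the pitches below are hypothetical MIDI note numbers, and the sketch covers one octave of descent rather than the etude's four.

```python
# Picket-fence interleaving: odd-numbered notes trace a descending
# chromatic scale, even-numbered notes cycle through an arpeggio
# figure, wedged between them like pickets between fence slats.

def interleave(chromatic, arpeggio):
    """Alternate elements: chromatic[0], arpeggio[0], chromatic[1], ..."""
    out = []
    for c, a in zip(chromatic, arpeggio):
        out.extend([c, a])
    return out

# One octave of a descending chromatic scale, as MIDI note numbers
# (a hypothetical starting pitch of A5 = 81).
chromatic = list(range(81, 69, -1))            # 81, 80, ..., 70

# A repeated-note arpeggio figure (hypothetical pitches), cycled to
# the same length as the chromatic line.
figure = [76, 72, 69, 72]                      # E5, C5, A4, C5
arpeggio = [figure[i % len(figure)] for i in range(len(chromatic))]

line = interleave(chromatic, arpeggio)

# The odd-numbered notes (1st, 3rd, ...) recover the chromatic descent,
# and the even-numbered notes recover the arpeggio "pickets":
assert line[0::2] == chromatic
assert line[1::2] == arpeggio
```

Played fast enough, it is the `line[0::2]` slice that the ear picks out as a smooth envelope, while the `line[1::2]` slice blurs into harmony.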

A word of explanation. On a piano there are twelve notes (some black, some 
white) from any note to the corresponding note one octave away. Playing them all in 
order creates a chromatic scale, as contrasted with the more familiar diatonic scales 
(usually major or minor). These latter involve only seven notes apiece (the eighth note 
being the octave itself). The seven intervals between the successive notes of a diatonic 
scale are not all equal. Some are twice as large as others, yet to the ear there is a perfect 
intuitive logic to it. Rather paradoxically, in fact, most people can sing a major scale 
without any trouble, uneven intervals notwithstanding, but few can sing a chromatic scale 
accurately, even though it "ought" to be much more straightforward-or so it would seem, 
since all its intervals are exactly the same size. The chromatic scale is so called because 
the extra notes it introduces to fill up the gaps in a diatonic scale have a special kind of 
"bite" or sharpness to them that adds color or piquancy to a piece. For that reason, a piece 
filled with notes other than the seven notes belonging to the key it is in is said to be
chromatic.
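The interval structure just described can be checked in a short sketch (pitches are written as semitone offsets from the tonic, a standard but here purely illustrative convention):

```python
# A chromatic scale climbs by 12 equal semitone steps per octave;
# a major (diatonic) scale uses 7 uneven steps, some twice as large
# as others.

chromatic_steps = [1] * 12                 # every step is one semitone
major_steps     = [2, 2, 1, 2, 2, 2, 1]    # whole and half steps

# Both ladders span exactly one octave (12 semitones):
assert sum(chromatic_steps) == sum(major_steps) == 12

# Notes of a major scale as semitone offsets from the tonic:
offsets = [0]
for step in major_steps:
    offsets.append(offsets[-1] + step)
print(offsets)   # [0, 2, 4, 5, 7, 9, 11, 12]
```

The five offsets missing from that list (1, 3, 6, 8, 10) are precisely the "extra" notes the chromatic scale introduces to fill the gaps.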

FIGURE 9-2. The strikingly different visual textures of six Chopin Etudes. On top, Op.
10, No. 11, in E-flat major; Op. 25, No. 1, in A-flat major; and Op. 25, No. 2, in F minor.
Below, Op. 25, No. 3, in F major; Op. 25, No. 6, in G-sharp minor; and Op. 25, No. 12,
in C minor. [From the G. Schirmer (Friedheim) edition.]

An arpeggio is a broken chord played one or more times in a row, moving up or down the 
keyboard. Thus it bears a resemblance to a spread-out scale, a little like someone 
bounding up a staircase three or four steps at a time. Chopin's music is filled with both 
arpeggios and chromatic passages, but the intricate fusion of these two opposite structural 
elements in the eleventh etude struck me as a masterpiece of ingenuity. And what is 
amazing is how it is perceived when the piece moves quickly. The chromatic scale comes 
through loud and clear, forming a smooth "envelope" of the pattern (your eye picks it out 
too), but the arpeggio blurs into a kind of harmonic fog that deeply affects one's 
perception, if only subliminally, or so it seems at least to the untrained ear. 

Each etude in that book I bought has a characteristic appearance, a visual texture 
(see Figure 9-2). This was one of the most striking things about the book at first. I was 
not at all accustomed to the idea of written music as texture; the simple pieces I had 
played up to that time were slow, so that every note was distinctly heard. In other words, 
the pieces in my playing experience were coarse-grained compared with the fine grain of 
a Chopin etude, where notes often go by in a blur and are merely parts of an auditory 
gestalt. Conversion of this kind of auditory experience to notated music sheets often 
yields quite stunning textures and patterns. Each composer has a characteristic set of 
patterns the eye becomes familiar with, and these etudes provided for me a stunning 
realization of that fact. 

* * * 

Sadly, I was forced to abandon etude Op. 25, No. 11, after having learned only a 
little more than a page-it was simply too hard for me. James Huneker, an American critic 
and one of Chopin's earliest English-language biographers, wrote of this study: "Small- 
souled men, no matter how agile their fingers, should not attempt it." Well, whatever the 
size of my soul, my fingers were not agile enough. For a while, that discouraged me from 
attacking any more Chopin etudes at all. A few years later, though, when I was working 
more earnestly on improving my modest piano skills, I came across an isolated Chopin 
etude in a book of medium-difficult selections. It turned out to be one of three etudes he 
had composed later in life, none of which had been on my parents' records. This was a 
real find! Luckily its texture looked less prickly, its pace less forbidding. Somewhat 
gingerly, I played through it very slowly and discovered that it was astonishingly 
beautiful and not as inaccessible as the others I'd tried. 

Like all the rest of Chopin's studies, this one is centered on a particular technical 
point, although to think of the etudes primarily in that way is like thinking of the fantastic 
gymnastic performances of Nadia Comaneci as merely fancy fitness exercises. Louis 
Ehlert, a nineteenth-century musicologist, wrote of one of the most beautiful etudes in 
Opus 25 (the sixth one, in G-sharp minor): "Chopin not only versifies an exercise in 
thirds; he transforms it into such a work of art that in studying it one could sooner
fancy oneself on Parnassus than at a lesson. He deprives every passage of all mechanical 
appearance by promoting it to become the embodiment of a beautiful thought, which in 
turn finds graceful expression in its motion." Similar words apply to this easier, 
posthumously published etude in A-flat major, whose chief technical concern is the 
concept of three against two, a special case of the general concept of polyrhythm. 

Mathematically, the concept is simple enough: play two musical lines 
simultaneously, one of them sounding three notes to the other's two. Usually the triplet 
and doublet are aligned so that they start at the same instant. When they are both plotted 
on a unit interval (see Figure 9-3a), you can see that the doublet's second note is struck 
halfway between the triplet's second and third notes. Of course, this is simply a pictorial 
representation of the fact that 1/2 is the arithmetic mean of 1/3 and 2/3. 
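That bit of arithmetic can be verified directly with exact fractions, in a minimal sketch that treats each onset as a point on the unit interval, as in the figure:

```python
from fractions import Fraction as F

# Onsets of a triplet (three equal notes) and a doublet (two equal
# notes) on a unit interval, both starting together:
triplet = [F(i, 3) for i in range(3)]   # 0, 1/3, 2/3
doublet = [F(i, 2) for i in range(2)]   # 0, 1/2

# The doublet's second note falls halfway between the triplet's
# second and third notes: 1/2 is the arithmetic mean of 1/3 and 2/3.
assert doublet[1] == (triplet[1] + triplet[2]) / 2

# Shifting the triplet right by 1/12 (as in Figure 9-3b) puts its
# third note at 9/12 = 3/4, halfway through the doublet's second
# note, which sounds from 1/2 to 1:
shifted = [t + F(1, 12) for t in triplet]
assert shifted[2] == (doublet[1] + 1) / 2

# Merged composite rhythm of the in-phase version:
print([str(t) for t in sorted(set(triplet + doublet))])
# → ['0', '1/3', '1/2', '2/3']
```

Rotating the "knob" of Figure 9-3c corresponds to adding an arbitrary constant offset to one voice, modulo 1.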

In theory, two voices playing a three-against-two pattern need not be perfectly 
aligned. If you shift the upper voice by, say, 1/12 to the right, you get a different picture 
(see Figure 9-3b). Here the triplet's third note starts halfway through the doublet's second. 
As you can see, the triplet extends beyond the end of the interval, presumably to join onto 
another identical 

FIGURE 9-3. The 3-against-2 phenomenon. In (a), as it is usually heard, with both voices
"in phase". In (b), one voice is shifted by 1/12 with respect to the other, producing a
quite unusual pattern of beats. In (c), it is shown how in principle the relative staggering
of the two voices could be adjusted continuously by a knob arrangement.

Voice 1 : j 

(a) j 

Voice 2: j 

1 1 1 | 1 M | 1 1 


| (triplet) 
| (doublet) 

) 1/3 2/3 


I ' 1 ' 

) 1/2 

— i 

1 1 1 1 1 | 1 1 1 1 





1/13 5/12 9/12 13/12 

(b) I— I 1— | 1 1 

1/2 1 

Vak** | | | | | | | | | | | | | 

(c) 9/12 


Pattern, Poetry, and Power in the Music of Frederic Chopin 


pattern. We can fold the pattern around and represent its periodicity in a circle, as is 
shown in Figure 9-3c. By rotating either of the concentric circles like a knob, we get all 
possible ways of hearing three beats against two. In Chopin and most other Western 
music, however, the only possibility that I have seen explored is where the triplet and doublet are perfectly "in phase".

At first I found the three-against-two rhythm hard to perform exactly. One has to learn 
how to hear the voices separately, to hear the roundish lilt of the three-rhythm weaving
itself into the square mesh of the two-rhythm. Of course, it's easy to hear when someone 
else is playing; the trick is to hear it in one's own playing! In principle the task is not 
hard, but it is one of coordination, and requires practice. I found that once I had mastered 
the problem of playing the two rhythms evenly and independently, I could play the whole 
etude. To play it-or to hear it-is like smiling through tears, it is so beautiful and sad at the 
same time. 

It is impossible to pinpoint the source of the beauty, needless to say, but it is certainly due 
in part to the way the chords in the right hand flow into one another. (See Figure 9-4.) 
Almost all the way through the piece, the 

FIGURE 9-4. The opening two measures of the posthumous etude in A-flat major, showing
its typical 3-against-2 pattern with slowly shifting chords in the right hand. [Music
printed by Donald Byrd's SMUT program at Indiana University.]

right hand plays three-note chords (six to a measure) against single notes by the left hand 
(four to a measure). The delicacy of the piece comes from the fact that very often, when 
one chord flows into the next one, only a single note changes. And to add to the subtlety 
of this slowly shifting sound-pattern, usually the steps taken by the shifting voice are 
single scale-steps rather than wide jumps. These "rules" do not hold all the way, of 
course; there are numerous exceptions. Nevertheless, there is a uniform aural texture to 
the piece that imbues it with its soft melancholy, known in Polish as tęsknota.

It is interesting to speculate about the extent to which such formal considerations 
occurred to Chopin while he was composing. It is well known that Chopin revered Bach's 
music. "Always play Bach" was his advice to a pupil.

FIGURE 9-5. Chopin's Etude in C major from Opus 10, his first etude, computer-printed so as to reproduce as closely as possible the stunning visual pattern that Chopin himself carefully produced in his manuscript. Aside from the beautiful alignment of crests and troughs, Chopin's manuscript features whole notes centered in their measures (in the bass clef). [Music printed by Donald Byrd's SMUT program at Indiana University.]

He was particularly devoted to the Well-Tempered Clavier, a paragon of
elegant formal structures. Chopin confided to his friend Eugene Delacroix, the painter, 
that "The fugue is like pure logic in music ... To know the fugue deeply is to be 
acquainted with the element of all reason and consistency in music." Clearly, Chopin 
loved pattern. 

A stunning demonstration of Chopin's extreme awareness of the visual appeal of the 
textures in his etudes is provided by the appearance of the manuscript of his etude Op. 10, 
No. 1, in C major, one about which James Huneker wrote, in his inimitable prose: 

The irregular black ascending and descending staircases of notes strike the 
neophyte with terror. Like Piranesi's marvellous aerial architectural dreams, these 
dizzy acclivities and descents of Chopin exercise a charm, hypnotic, if you will, 
for eye as well as ear. Here is the new technique in all its nakedness, new in the 
sense of figure, design, pattern, web, new in a harmonic way. The old order was 
horrified at the modulatory harshness; the young sprigs of the new, fascinated and 
a little frightened. A man who could thus explode a mine that assailed the stars 
must be reckoned with. 

That "terror-stricken neophyte" might well have been me. Huneker's words form 
an amusing contrast with what the nineteen-year-old Chopin himself wrote of this, his 
first etude, in a letter to his friend Tytus Woyciechowski in 1829: "I have written a large 
exercise in form, in my own personal style; when we get together, I'll show it to you." A 
finished copy, believed to be in Chopin's hand, is now in the Museum of the Frederic 
Chopin Society in Warsaw. With the present turmoil in Poland, it would be difficult to 
gain permission to reproduce it directly. Fortunately, a long-standing research project of 
my friend Donald Byrd at Indiana University has been to develop a computer program 
that can print out music according to specification, and at professional standards. With 
some help from our friend Adrienne Gnidec, Don and I coaxed his marvelous program 
into printing the music in a very strange and visually striking way (see Figure 9-5). This 
figure reproduces quite accurately the large-scale visual patterns of Chopin's own 
manuscript, in which Chopin took great care to align all the crests of the massive waves. 
When this piece is played at the proper speed, each sweep up and down the keyboard is 
heard as one powerful surge, like the stroke of an eagle's wing, with the notes of each 
crest sparkling brilliantly like wingtips flashing in the sun. 

Another interesting feature of Chopin's notation, here copied, is his positioning of the 
doubled whole notes in the bass. Instead of placing them at the very start of each 
measure, aligned with the sixteenth-note rests, Chopin centered each one in its own 
measure, thereby creating an elegant visual balance, though losing some notational 
clarity. Musically, such centering has no effect. Since a whole note lasts for the duration 
of an entire 4/4 measure, it must be struck at the start of the measure, otherwise it would 
overflow into the next measure, and that is impossible. (Or rather, it would 



FIGURE 9-1. The opening bars of the right hand of Etude Op. 10, No. 1, by Frederic Chopin,
represented graphically. Underneath it and aligned with it is the conventional notation. [Computer
graphics by Donald Byrd and the author. Music notation printed by the SMUT music-printing
system, developed by Donald Byrd at Indiana University.]



FIGURE 9-7. An intricate 2-against-3 rhythm in Chopin's Waltz, Op. 42, in A-flat major.
Although there are six notes in the right hand of each measure, only two of them (printed with
stems up) belong to the main melody. They beat against the oom-pah-pah of the left hand. [Music
printed by Donald Byrd's SMUT program at Indiana University.]

FIGURE 9-8. One of the most complex examples of polyrhythm in all of Chopin's output. These
two measures from his F-minor Ballade involve 3-against-2 (on a local scale) as well as
3-against-8 (on a more global scale, involving the notes with flags flying upwards). [Music
printed by Donald Byrd's SMUT program at Indiana University.]



violate a much more rigid convention of music notation-namely, that no note can
designate a sound that overflows the boundaries of its measure. Hence the only possible
interpretation is that the whole note is to be struck at the outset. In other words, the
centering is simply a charming artistic touch with a quaint nineteenth-century flavor, like
the ornaments on a Victorian house. The modern music-reading eye is used to more
functional notation; in particular, it expects the staff to be in essence a graph of the sound,
in which the horizontal axis is time. Thus notes struck simultaneously are expected to line
up vertically.

But let us return to the matter of Chopin's preoccupation with form and structure.
Few composers of the romantic era have penned such visually patterned pages, have spun
a whole cloth out of a single textural idea. With Chopin, though, preoccupation with strict
pattern never took precedence over the expression of heartfelt emotions. One must
distinguish, it seems to me, between "head pattern" and "heart pattern", or, in more
objective-sounding terms, between syntactic pattern and semantic pattern. The notion of a
syntactic pattern in music corresponds to the formal structural devices used in poetry:
alliteration, rhyme, meter, repetition of sounds, and so on. The notion of a semantic
pattern is analogous to the pattern or logic that underlies a poem and gives it reason to
exist: the inspiration, in short.

That there are such semantic patterns in music is as undeniable as that there are
courses in the theory of harmony. Yet harmony theory has no more succeeded in
explaining such patterns than any set of rules has yet succeeded in capturing the essence
of artistic creativity. To be sure, there are words to describe well-formed patterns and
progressions, but no theory yet invented has even come close to creating a semantic sieve
so fine as to let all bad compositions fall through and to retain all good ones. Theories of
musical quality are still descriptive and not generative; to some extent they can explain in
hindsight why a piece seems good, but they are not sufficient to allow someone to create
new pieces of quality and interest. It is nonetheless fascinating, if not downright
compelling, to try to find certain earmarks of greatness, to try to understand why it is that
one composer's music can reach in and touch your innermost core while another composer's
music leaves you cold and unmoved. It is a mystery.

* * * 

After learning the posthumous A-flat etude, I felt encouraged to tackle some of the others.
One of the ones I had loved the most was Op. 25, No. 2, in F minor. To me, it was a soft,
rushing whisper of notes, a fluttering like the leaves of a quaking aspen in a gentle
breeze. Yet it was not just a scene of nature; it expressed a human longing, a melancholy
infused with strange and wild yearnings for something unknown and remote: tęsknota
again. I knew this melody inside out from many years of hearing it, and looked forward
to transferring it to my fingers.



After a couple of months' practice, my fingers had built up enough stamina to play the 
piece fairly evenly and softly. This was very satisfying to me until one day, an 
acquaintance for whom I was playing it commented, "But you're playing it in twos-it's 
supposed to be in threes!" What she meant by this was that I was stressing every second
note, rather than every third. Bewildered, I looked at the score, and of course, as she had 
pointed out, the melody was written in triplets. But surely Chopin had not meant it to be 
played in threes. After all, I knew the melody perfectly! Or did I? I tried playing it in 
threes. It sounded strange and unfamiliar, a perceptual distortion the like of which I had 
never experienced. 

I went home and took out my parents' old Remington LP of the Chopin etudes 
Opus 25 (played by a wonderful but hardly remembered pianist named Alexander 
Jenner). I put on the F minor etude and tried to hear which way he played it. I found I 
could hear it either way. Jenner had played it so smoothly, so free of accent (as they say 
Chopin did, by the way), that one really could not tell which way to hear it. All of a 
sudden I saw that I really knew two melodies composed of the exact same sequence of 
notes! I felt myself to be very fortunate, because now I could experience this familiar old
melody in a fresh new way. It was like falling in love with the same person twice. 

I had to practice hard to undo the bad habits of "biplicity" and to replace them 
with the indicated "triplicity", but it was a delight. The hardest part, however, was 
combining the two hands. With duplets in the right hand, this had presented no problem; 
all the accented notes fell in coincidence with notes in the left hand, moving at exactly 
half the speed of the right hand in a pattern of wide arpeggios. But if I were to spread my 
accents thinner, so that I accented only every third note of the right hand, then many of 
the notes in the left hand would be struck simultaneously with weak notes in the right. 
This may sound simple enough, but I found it very tricky. The difference is shown in
Figure 9-6 (which, like most of the others in this article, was created by Donald Byrd's
SMUT program).

FIGURE 9-6. The opening of Etude Op. 25, No. 2, printed in two ways. In (a), as Chopin 
penned it, and as it is usually conceived: in threes. In (b), as I first heard it and first 
learned to play it: in twos. [Music printed by Donald Byrd's SMUT program at Indiana 
University. ] 





Even after mastering the right-hand solo in triplets, I found that when I put the parts
together, it was at first nearly impossible to keep from accenting the melodic notes
coinciding with the bass. It was a fearsome task of coordination, yet I enjoyed it greatly. After
a while something "snapped into place", and I found I was doing it. It was not something I
could consciously control or explain; I simply was playing it right, all of a sudden. Huneker, in
his commentary on this etude, quotes Theodor Kullak, another Chopin specialist, about
the "algebraic character of tone-language" and then adds his own image: "At times so
delicate a design that it recalls the faint fantastic tracery made by frost on glass."

Chopin's music is filled to the brim with such "algebraic" tricks of cross-rhythm. He
seemed to revel in them in a way that no previous composer ever had. A famous example is his
iconoclastic waltz, Opus 42 in A-flat major, written in 1840. In this waltz, the bass line follows
the "oom-pah-pah" convention, but the melody of the first section completely counters this
three-ness; its six eighth-notes, instead of being broken into three pairs aligned with the
left hand's bounces, form two triplets, as in the F minor etude just discussed (see Figure 9-7).
Here, though, in contrast to the nearly accentless shimmering desired in that etude, the initial
notes of successive triplets are to be clearly emphasized and prolonged, thus creating a higher-
level melody (shown in red) abstracted out of the quietly rippling right hand. This melody is
composed of two notes per measure, beating regularly against the three notes of the waltzing
bass. It is a marvelous trompe-l'oreille effect, one that Chopin exploited again in his E
major scherzo, Opus 54, written in 1842, when he was 32.

* * * 

In that same year, Chopin wrote what some admirers consider to be his greatest work:
the fourth Ballade, in F minor. This piece is filled with noteworthy passages, but one in
particular had a profound effect on me. One day, long after I knew the piece intimately from
recordings, a friend told me that he had been practicing it and wanted to show me "a bit of tricky
polyrhythm" that was particularly interesting. I was actually not interested in hearing
about polyrhythm at the moment, and so I didn't pay much attention when he sat down at
the keyboard. Then he started to play. He played just two measures, but by the time they were
over, I felt as if someone had reached into the very center of my skull and caused something to
explode deep down inside. This "bit of tricky polyrhythm" had undone me completely. What
in the world was going on?

Of course, it was much more than just polyrhythm, but that is part of it. As you can see
in our three-color plot of the two measures concerned (Figure 9-8), the left hand forms
large, rumbling waves of sound, like ocean waves on which a ship is sailing. Each wave
consists of six notes forming a rising and falling arpeggio (in blue). High above these billows



sound, a lyrical melody (in red) soars and floats, emerging out of a blur of notes swirling
around it like a halo (in black). This high melody and its halo are actually fused together
in the right hand's eighteen notes per measure. They are written as six groups of three, so
that in each half-measure, nine high notes beat against the six-note ocean wave below,
already a clear problem in three-against-two. But look: on top of those flying triplets,
there are eighth-note flags placed on every fourth note! Thus there is a flag on the first
note of the first triplet, on the second note of the second triplet, on the third note of the
third triplet, on the fourth note of the fourth triplet ... Well, that cannot be. In fact, the
fourth triplet has no flag at all; the flag goes to the first note of the fifth triplet, and the
pattern resumes. Flags waving in the wind, high on the masts of a sea-borne sailing ship.
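
The counting argument behind the vanishing flag can be spelled out in a few lines of Python (my own sketch, not part of the original): number the eighteen notes of a measure 0 through 17, put a flag on every fourth note, and see which triplet each flagged note falls in.

```python
# Flags on every fourth note of six triplets (18 notes per measure),
# as in the F-minor Ballade passage. A sketch of the counting argument.
for i in range(0, 18, 4):        # flagged notes: 0, 4, 8, 12, 16
    triplet_no = i // 3 + 1      # which triplet (1st through 6th) holds note i
    position = i % 3 + 1         # 1st, 2nd, or 3rd note of that triplet
    print(f"flag on note {position} of triplet {triplet_no}")
```

The fourth triplet never appears in the output: the flag that "should" fall on its nonexistent fourth note lands instead on the first note of the fifth triplet, exactly as described above.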

This wonderfully subtle rhythmic construction might, just might, have been
invented by anyone, say by a rhythm specialist with no feeling for melody. And yet it was 
not. It was invented by a composer with a supreme gift for melody and harmony as well 
as for rhythm, and this can be no coincidence. A mere "rhythms hacker" would not have 
the sense to know what to do with this particular rhythm any more than with any other 
rhythmic structure. There is something about this passage that shows true genius, but 
words alone cannot define it. You have to hear it. It is a burning lyricism, having a power 
and intensity that defy description. 

One must wonder about the soul of a man who at age 32 could write such 
possessed music-a man who at the tender age of nineteen could write such perfectly 
controlled and poetic outbursts as the etudes of Opus 10. Where could this rare 
combination of power, poetry, and pattern, this musical self-confidence and maturity, 
have come from? 

* * * 

In search of an answer, one must look to Chopin's roots, both his family roots and his 
roots in his native land, Poland. Chopin was born in a small and peaceful country village 
30 miles west of Warsaw called Zelazowa Wola, which means Iron Will. His father, 
Nicolas (Mikolaj) Chopin, was French by birth but emigrated to Poland and became an 
ardent Polish patriot (so ardent, in fact, that he participated in the celebrated but ill-fated 
insurrection led by the national hero Jan Kilinski in 1794 against the Russian occupation 
of Warsaw). Chopin's mother, Justyna Krzyzanowska, was a distant relative of the rich 
and aristocratic Skarbek family, who lived in Zelazowa Wola. She lived with them as a 
family member and took care of various domestic matters. When Mikolaj Chopin came to 
be the tutor of the Skarbek children, he and Justyna met and married. In addition to being 
a gentle and loving mother, she was as fervent a Polish patriot as her husband, and had a 
romantic and dreamy streak. They had four children, of whom Frederic, born in 1810, 
was the second. The other three children were girls, one of whom died young, of 
tuberculosis-a disease that in the end would 



claim Frederic as well, at age 39. The four children doted on one another. It was a close-
knit family, and all in all, Chopin had a very happy childhood.

The family moved to Warsaw when Frederic was very young, and there he was
exposed to culture of all kinds, since his father was a teacher and knew university people
of all disciplines. Frederic was a fun-loving and spirited boy. The summer he was fourteen
he spent away from home in a lilac-filled village called Szafarnia. He wrote home a series
of letters gleefully mocking the style of the Warsaw Courier, a gossipy provincial paper of
the times. One item from his "Szafarnia Courier" ran as follows (in full):

The Esteemed Mr. Pichon [an anagram of "Chopin"] was in Golub on the 26th of 
the current month. Among other foreign wonders and oddities, he came across a
foreign Pig, which Pig quite specially attracted the attention of this most 
distinguished Voyageur. 

Chopin's musical talent, something he shared with his mother, emerged very early and
was nurtured by two excellent piano teachers, first by a gentle and good-humored old
Czech named Wojciech Zywny, and later by the director of the Warsaw Conservatory,
Józef Elsner.

Chopin grew up in the capital city of the "Grand Duchy of Warsaw", what little
remained of Poland after it had been decimated, in three successive "partitions" in the late
eighteenth century, by its greedy neighbors: Russia, Prussia, and Austria. The turn of the
century was marked
century was marked by a mounting nationalistic fervor; in Warsaw and Cracow, the two 
main Polish cities, there occurred a series of rebellions against the foreign occupiers, but 
to no avail. A number of ardent Polish nationalists went abroad and formed "Polish 
Legions" whose purpose was to fight for the liberation of all oppressed peoples and to 
eventually return to Poland and reclaim it from the occupying powers. When Napoleon 
invaded Russia in 1806, a Polish state was established for a brief shining instant; then all 
was lost again. The Polish nation's flame flickered and nearly went out totally, but as the 
words to the Polish national anthem proclaim, "Jeszcze Polska nie zginęła, póki my
żyjemy." It is a curious sentence, built out of past and present tenses, and literally
translated it runs: "Poland has not yet perished, as long as we live." The first clause 
sounds so fatalistic, as if to admit that Poland surely will someday perish, but not quite 
yet! Some Poles tell me that the connotations are not that despairing, that a better overall 
translation would be, "Poland will not perish, as long as we live." Others, though, tell me 
that the construction is subtly ambiguous, that its meaning floats somewhere between 
grim fatalism and ardent determination. 

* * * 

The Poles are a people who have learned to distinguish sharply between two conceptions 
of Poland: Poland the abstract social entity, at whose core 



are the Polish language and culture, and Poland the concrete geographical entity, the land 
that Poles live in. Naród polski, the "Polish nation", represents a spirit rather than a piece
of territory, although of course the nation came into existence because of the bonds 
between people who lived in a certain region. It is the fragility of this flickering flame, 
and the determination to keep it alive, that Chopin's music reflects so purely and 
poignantly. There is a certain fusion of bitterness, anger, and sadness called żal that is
uniquely Polish. One hears it, to be sure, in the famous mazurkas and polonaises, pieces 
that Chopin composed in the form of national dances. The mazurkas are mostly smaller 
pieces based on folk-like tunes with a lilting 3/4 rhythm; the polonaises are grand, heroic, 
and martial in spirit. But one hears this burning flame of Poland just as much in many of 
Chopin's other pieces-for example, in the slow middle sections of such pieces as the 
waltzes in A minor (Op. 34, No. 2) and A-flat major (Op. 64, No. 3), the pathos-filled 
Prelude in F-sharp major (Op. 28, No. 13), and particularly in the middle part of the F- 
sharp minor Polonaise (Opus 44), where a ray of hope bursts through dark visions like a 
gleam in the gloom. One hears żal in the angry, buzzing harmonies of the etude in C-
sharp minor (Op. 10, No. 4) and in the passion of the etude in E major (Op. 10, No. 3). In 
fact, Chopin is said to have cried out once, on hearing this piece played in his presence, 
"O ma patrie!" ("O my homeland!").

But aside from the fervent patriotism of Chopin's music there is in it that different 
and softer kind of Polish nostalgia: tęsknota. It is his yearning for home-for his childhood
home, for his family, for a dream-Poland that at age twenty he had left forever. In 1830, 
at the height of the turmoil in Warsaw, Chopin set out for France. He had a premonition 
that he would never return. Traveling by way of Vienna, he made slow progress. When 
things boiled over in late 1831-when, in September 1831, the Russians finally crushed the 
desperate Warsaw insurrection-Chopin was in Stuttgart. On hearing the news, he was 
overwhelmed with agitation and grief, partly out of fear for the fate of his family, partly 
out of love for his stricken homeland. He wavered about going back to Poland and 
fighting for his nation, but the idea eventually receded from his mind. 
It was at about this time that he composed the twelfth and final etude of his Opus 10. Of 
this etude, Chopin's Polish biographer Maurycy Karasowski wrote: 

Grief, anxiety, and despair over the fate of his relatives and his dearly beloved 
father filled the measure of his sufferings. Under the influence of this mood he 
wrote the C minor etude, called by many the "Revolutionary Etude". Out of the 
mad and tempestuous storm of passages for the left hand the melody rises aloft, 
now passionate and anon proudly majestic, until thrills of awe stream over the 
listener, and the image is evoked of Zeus hurling thunderbolts at the world. 



This is pretty strong language. Huneker echoes these sentiments, as does the French
pianist Alfred Cortot, who in his famous Student's Edition of the etudes refers to the piece
as "an exalted outcry of revolt ... wherein the emotions of a whole race of people are
alive and throbbing." I myself have never found this etude as overwhelming as these
authors do, although it is unquestionably a powerful outburst of emotion. If someone had
told me that one of the etudes had come to be known as the "Revolutionary Etude" and had
asked me to guess which one, I would certainly have picked one of the last two of Opus
25, either No. 11 in A minor, the one pictured at the beginning of this article, with its
tumultuous cascades of notes in the right hand against the surging, heroic melody in the
left hand, or else No. 12 in C minor, which sounds to me like a glowing inferno seen at
night from far away, flaring up unpredictably and awesomely. As for the actual
"Revolutionary Etude", I have always found its ending enigmatic, fluctuating as it does
between major and minor, between the keys of F and C, like an indecisive thunderclap.

Still, this piece, like the martial A-flat major Polonaise (Opus 53), has become a
symbol of the tragic yet heroic Polish fate. Wherever and whenever it is played, it is
special to Poles; their hearts beat faster, and their spirits cannot fail to be deeply moved. I
will never forget how I heard it nightly as the clarion call of Poland, when, from a small
town in Germany in 1975, I would try to tune in Radio Warsaw. Two measures of shrill,
rousing chords above a roaring left hand, like a call to arms, were repeated over and over
again as the call signal, preceding a nightly broadcast of Chopin's music. Nor will I ever
forget how that feeble signal of Radio Warsaw faded in and out, symbolizing to me the
flickering flame of Poland's spirit.

* * * 

However one chooses to describe it-whether in terms of żal and tęsknota, or patriotyzm
and polyrhythm, or chromaticism and arpeggios-Chopin's music has had a deep influence
on the composers of succeeding generations. It is perhaps most visible in the piano music
of Alexander Scriabin, Sergei Rachmaninoff, Gabriel Faure, Felix Mendelssohn, Robert
and Clara Schumann, Johannes Brahms, Maurice Ravel, and Claude Debussy, but
Chopin's influence is far more pervasive than even that would suggest. It has become one
of the central pillars of Western music, and as such it has its effect on the music perceived
and created by everyone in the Western world.

In one way, Chopin's music is purely Polish, and that Polishness, polskność, extends even
to foreign-inspired pieces such as his Bolero, Tarantella, Barcarolle, and so on. In another
way, though, Chopin's music



is universal, so that even his most deeply Polish pieces-the mazurkas and polonaises- 
speak to a common set of emotions in everyone. But what are these emotions? How are 
they so deeply evoked by mere pattern? What is the secret magic of Chopin? I know of 
no more burning question. 

Post Scriptum. 

This column is a unique one, in that it expresses certain kinds of emotions that are not 
expressed as directly in my other published writings. But the part of me represented by it 
is no smaller and no less important than the part of me from which my other writings 
flow. It was provoked, of course, by the worsening crisis in Poland in late 1981, just at the time of the takeover by the
military and the tragic collapse of Solidarity. In fact, it was almost exactly 150 years after 
the tragic takeover of Warsaw by the Russians that triggered the Revolutionary Etude. I 
guess Poland has not yet perished, but it is certainly going through terrible tribulations,
once again. 

I received some heart-warming correspondence in response to this column. One letter, 
from Andrzej Krasinski, a Pole living in West Germany, ran this way: 

I just read your nice article about Chopin's music in the April issue of Scientific 
American in which you have shown so much sympathy and understanding for a 
Polish soul, and so much care for the Polish language. I enjoyed it a lot, although I 
am no expert in music. However, by my birth, I happen to be an expert in the Polish 
language, and I wish to point out a minor error you have made. The name of the 
village where Chopin was born, Zelazowa Wola, does not mean "Iron Will", 
although you might have picked such a meaning by looking for the two words in a 
dictionary separately. The word wola, which means "will" alone, when applied as a 
part of a village's name means that the village was founded by somebody's will, and 
then the other part of the village's name usually stems from a person's name. There 
are numerous examples of such names in Poland, and normally they are attached to 
small hamlets. Consequently, Wola as a village's name has a second meaning in 
Polish, and that is simply "small village". The word Zelazowa does not seem to 
stem from a person's name (although I have no literature here to answer that 
question with certainty). It suggests that the founding of the village had something 
to do either with iron ore being found somewhere in the neighborhood or with iron 
being processed there. So the best translation of Zelazowa Wola would be "Iron 
Village" or "Iron-Ore Village". "Iron will" in Polish would be Zelazna Wola, and 
the name of Chopin's village does quite certainly not mean that. 

I stand corrected ! 



Jakub Tatarkiewicz, a physicist writing from Warsaw, very gently pointed out that
I had somehow managed to invent a new Polish word: polskność. I was quite surprised to
learn that I had invented it, since I was sure I had seen it somewhere, but as it turns out,
what I had actually seen was polskość (with no n). Tatarkiewicz complimented me,
however, for my talent in coming up with a good neologism, for, he said, my word has
poignant overtones of such loaded words as tęsknota and Solidarność. As he put it: "I can
only doubt if you really meant all those connotations-or is it just Chopin's music that
played in your soul?!" I don't know. I guess I'd chalk it up to serendipity.

Great art has a way of evoking continual commentary; it is a bottomless source of 
inspiration to others. I have my blind spots in terms of understanding music, that's for 
sure; but Chopin hits some kind of bull's-eye in my soul. If I could meet any one person 
from the past, it would be Chopin without any doubt. What saddens me enormously is his 
relatively small output. He died at age 39, with his expressive powers clearly as strong as 
ever. Whatever would he have produced, had he lived to the age of, say, 65 as Bach did? 
Unbelievable firegems, I am sure. Indeed, I cannot imagine who I would be if I knew 
those pieces. 




Parquet Deformations: 
A Subtle, Intricate Art Form 

July, 1983 

WHAT'S the difference between music and visual art? If I were asked this, I would 
have no hesitation in replying. To me, the major difference is clearly temporality. Works 
of music intrinsically involve time; works of art do not. More precisely, pieces of music 
consist of sounds intended to be played and heard in a specific order and at a specific 
speed. Music is thus fundamentally one-dimensional; it is tied to the rhythms of our 
existence. Works of visual art, by contrast, are generally two-dimensional or three- 
dimensional. Paintings and sculptures seldom have any intrinsic "scanning order" built 
into them that the eye must follow. Mobiles and other pieces of kinetic art may change 
over time, but often without any specific initial state or final state or intermediate stages. 
You are free to come and go as you please. 

There are exceptions to this generalization, of course. European art has its grand 
friezes and historic cycloramas, and Oriental art has intricate pastoral scrolls of up to 
hundreds of feet in length. These types of visual art impose a temporal order and speed 
on the scanning eye. There is a starting point and a final point. Usually, as in stories, 
these points represent states of relative calm-especially the end. In between them, various 
types of tension are built up and resolved in an idiosyncratic but pleasing visual rhythm. 
The calmer end states are usually orderly and visually simple, while the tenser 
intermediate states are usually more chaotic and visually confusing. If you replace 
"visual" by "aural", virtually the same could be said of music. 

I have been fascinated for many years by the idea of trying to capture the essence 
of the musical experience in visual form. I have my own ideas as to how this can be done; 
in fact, I spent several years working out a form of visual music. It is perhaps the most 
original and creative thing I have ever done. However, by no means do I feel that there is 
a unique or best way to carry out this task of "translation", and indeed I have often 
wondered how 



others might attempt to do it. I have seen a few such attempts, but most of them, 
unfortunately, did not grab me. One striking counterexample is the set of "parquet 
deformations" meta-composed by William Huff, a professor of architectural design at the 
State University of New York at Buffalo. 

I say "meta-composed" for a very good reason. Huff himself has never executed a 
single parquet deformation. He has elicited hundreds of them, however, from his 
students, and in so doing has brought this form of art to a high degree of refinement. Huff 
might be likened to the conductor of a fine orchestra, who of course makes no sound 
whatsoever during a performance. And yet we tend to give the conductor most of the 
credit for the quality of the sound. We can only guess how much preparation and 
coaching went into this performance. And what about the selection of the pieces and 
tempos and styles-not to mention the many-year process of culling the performers? 

So it is with William Huff. For 23 years, his students at Carnegie-Mellon and 
SUNY at Buffalo have been prodded into flights of artistic inspiration, and it is thanks to 
Huff's vision of what constitutes quality that some very beautiful results have emerged. 
Not only has he elicited outstanding work from students, he has also carefully selected 
what he feels to be the best pieces and these he is preserving in archives. For these 
reasons, I shall at times refer to Huff's "creations", but it is always in this more indirect 
sense of "meta-creations" that I shall mean it. 

Not to take credit away from the students who executed the individual pieces, but there is a 
larger sense of the term "credit" that goes exclusively to Huff, the person who has shaped 
this whole art form himself. Let me use an analogy. Gazelles are marvelous beasts, yet it 
is not they themselves but the selective pressures of evolution that are responsible for 
their species' unique and wondrous qualities. Huff's judgments and comments have here 
played the role of those impersonal evolutionary selective pressures, and out of them has 
been molded a living and dynamic tradition, a "species" of art exemplified and extended 
by each new instance. 

* * * 

All that remains to be said by way of introduction is the meaning of the term 
parquet deformation. It is nearly self-explanatory, actually: traditionally, a parquet is a 
regular mosaic made out of inlaid wood, on the floor of an elegant room; and a 
deformation-well, it's somewhere in between a distortion and a transformation. Huff's 
parquets are more abstract: they are regular tessellations (or tilings) of the plane, ideally 
drawn with zero-thickness line segments and curves. The deformations are not arbitrary 
but must satisfy two basic requirements: 

(1) There shall be change only in one dimension, so that one can see a temporal 
progression in which one tessellation gradually becomes another; 



(2) At each stage, the pattern must constitute a regular tessellation of the plane 
(i.e., there must be a unit cell that could combine with itself so as to cover an infinite 
plane exactly). 

(Actually, the second requirement is not usually adhered to strictly. It would be 
more accurate to say that the unit cell at any stage of a parquet deformation can be easily 
modified so as to allow it to tile the plane perfectly.) 
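The two requirements can be made concrete in a little program. The following is a minimal sketch under my own assumptions (a square tiling and a single "hinge" device); it illustrates the constraints themselves, not any piece from Huff's studio:

```python
# Toy parquet deformation: a square tiling is morphed by giving each
# vertical cell edge a midpoint "hinge" whose sideways flex grows with
# the column index.

def hinged_edge(col, n_cols, max_flex=0.4):
    """Polyline for one vertical cell edge in column `col`, in cell coordinates.

    Requirement (1): the flex varies with `col` only, so the tessellation
    changes along just one dimension.
    """
    flex = max_flex * col / (n_cols - 1)
    return [(0.0, 0.0), (flex, 0.5), (0.0, 1.0)]

def parquet(n_cols=8, n_rows=4):
    """All vertical-edge polylines of the deformation, in absolute coordinates.

    Requirement (2): within any one column the flex is constant, so every
    cell in that column is congruent and the column still tiles the plane.
    """
    lines = []
    for col in range(n_cols):
        edge = hinged_edge(col, n_cols)
        for row in range(n_rows):
            lines.append([(x + col, y + row) for x, y in edge])
    return lines
```

Rendered with any plotting routine, these polylines give a leftmost column of straight-edged squares that gradually flexes, column by column, into hinged cells at the right edge.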

From this very simple idea emerge some stunningly beautiful creations. Huff 
explains that he was originally inspired, back in 1960, by the woodcut "Day and Night" 
of M. C. Escher. In that work, forms of birds tiling the plane are gradually distorted (as 
the eye scans downwards) until they become diamond-shaped, looking like the 
checkerboard pattern of cultivated fields seen from the air. Escher is now famous for his 
tessellations, both pure and distorted, as well as for other hauntingly strange visual games 
he played with art and reality. 

Whereas Escher's tessellations almost always involve animals, Huff decided to 
limit his scope to purely geometric forms. In a way, this is like a decision by a composer 
to use austere musical patterns and to totally eschew anything that might conjure up a 
"program" (that is, some sort of image or story behind the sounds). An effect of this 
decision is that the beauty and visual interest must come entirely from the complexity and 
the subtlety of the interplay of abstract forms. There is nothing to "charm" the eye, as 
with pictures of animals. There is only the uninterpreted, unembellished perceptual 
experience. 

Because of the linearity of this form of art, Huff has likened it to visual music. He 
writes: 

Though I am spectacularly ignorant of music, tone deaf, and hated those piano 
lessons (yet can be enthralled by Bach, Vivaldi, or Debussy), I have the students 
'read' their designs as I suppose a musician might scan a work: the themes, the 
events, the intervals, the number of steps from one event to another, the rhythms, 
the repetitions (which can be destructive, if not totally controlled, as well as 
reinforcing). These are principally temporal, not spatial, compositions (though all 
predominantly temporal compositions have, of necessity, an element of the spatial 
and vice versa-e.g., the single-frame picture is the basic element of the moving picture). 

* * * 

What are the basic elements of a parquet deformation? First of all, there is the 
class of allowed parquets. On this, Huff writes the following: 

We play a different (or rather, tighter) game than does Escher. We work with only 
A tiles (i.e., congruent tiles of the same handedness). We do not use, as he does, A and A' 
tiles (i.e., congruent tiles of both handednesses). Finally, we don't use A and B tiles (i.e., 
two different interlocking tiles), since two such tiles can always be seen as subdivisions 
of a single larger tile. 



FIGURE 10-1. Fylfot Flipflop, by Fred Watts. Created in the studio of William Huff (1963). 

FIGURE 10-2. Crossover, by Richard Lane. Created in the studio of William Huff (1963). 



The other basic element is the repertoire of standard deforming devices. Typical devices include: 

• lengthening or shortening a line; 

• rotating a line; 

• introducing a "hinge" somewhere inside a line segment so that it can "flex"; 

• introducing a "bump" or "pimple" or "tooth" (a small intrusion or extrusion 
having a simple shape) in the middle of a line or at a vertex; 

• shifting, rotating, expanding, or contracting a group of lines that form a natural unit; 

and variations on these themes. To understand these descriptions, you must realize that a 
reference to "a line" or "a vertex" is actually a reference to a line or vertex inside a unit 
cell, and therefore, when one such line or vertex is altered, all the corresponding lines or 
vertices that play the same role in the copies of that cell undergo the same change. Since 
some of those copies may be at 90 degrees (or other angles) with respect to the master 
cell, one locally innocent-looking change may induce changes at corresponding spots, 
resulting in unexpected interactions whose visual consequences may be quite exciting. 
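This propagation of one edit to every rotated copy of the master cell can itself be sketched in code. The cell representation and function names below are hypothetical, my own invention for illustration:

```python
def rot90(p, quarter_turns):
    """Rotate a point about the origin by the given number of quarter turns."""
    x, y = p
    for _ in range(quarter_turns % 4):
        x, y = -y, x
    return (x, y)

def lay_tiling(master, placements):
    """Place rotated, translated copies of the master cell's line segments.

    `master` is a list of segments ((x1, y1), (x2, y2)) in cell coordinates;
    `placements` is a list of ((ox, oy), quarter_turns) pairs. Because every
    copy is generated from the master, editing one segment of the master
    edits all copies at once, including the rotated ones.
    """
    tiles = []
    for (ox, oy), turns in placements:
        tiles.append([tuple((rx + ox, ry + oy)
                            for rx, ry in (rot90(p, turns) for p in seg))
                      for seg in master])
    return tiles

master = [((0.0, 0.0), (1.0, 0.0))]       # one horizontal segment in the cell
placements = [((0, 0), 0), ((3, 0), 1)]   # an upright copy and a rotated copy
before = lay_tiling(master, placements)
master[0] = ((0.0, 0.0), (1.0, 0.3))      # one "innocent" edit to the master
after = lay_tiling(master, placements)    # the edit reappears, rotated, elsewhere
```

The rotated copy's segment changes too, even though only the master cell was touched, which is the mechanism behind those unexpected interactions.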

* * * 

Without further ado, let us proceed to examine some specific pieces. Look at the 
one called "Fylfot Flipflop" (Figure 10-1). It is an early one, executed in 1963 by Fred 
Watts at Carnegie-Mellon. If you simply let your eye skim across the topmost line, you 
will get the distinct sensation of scanning a tiny mountain range. At either edge, you 
begin with a perfectly flat plain, and then you move into gently rolling hills, which 
become taller and steeper, eventually turning into jagged peaks; then past the centerpoint, 
these start to soften into lower foothills, which gradually tail off into the plain again. This 
much is obvious even upon a casual glance. Subtler to see is the line just below, whose 
zigging and zagging is 180 degrees out of phase with the top line. Thus notice that in the 
very center, that line is completely at rest: a perfectly horizontal stretch flanked on either 
side by increasingly toothy regions. Below it there are seven more horizontal lines. Thus 
if one completely filtered out the vertical lines, one would see nine horizontal lines 
stacked above one another, the odd-numbered ones jagged in the center, the even-numbered 
ones smooth in the center. 

Now what about the vertical lines? Both the lefthand and righthand borderlines 
are perfectly straight vertical lines. However, their immediate neighbors are as jagged as 
possible, consisting of repeated 90-degree bends, back and forth. Then the next vertical 
line nearer the center is practically straight up and down again. Then there is a wavy one 
again, and so on. As 



you move across the picture, you see that the jagged ones gradually get less jagged and 
the straight ones get increasingly jagged, so that in the middle the roles are completely 
reversed. Then the process continues, so that by the time you've reached the other side, 
the lines are back to normal again. If you could filter out the horizontal lines, you would 
see a simple pattern of quite jaggy lines alternating with less jaggy lines. 

When these two extremely simple independent patterns-the horizontal and the 
vertical-are superimposed, what emerges is an unexpectedly rich perceptual feast. At the 
far left and right, the eye picks out fylfots-that is, swastikas-of either handedness 
contained inside perfect squares. In the center, the eye immediately sees that the central 
fylfots are all gone, replaced by perfect crosses inside pinwheels. 

And then a queer perceptual reversal takes place. If you just shift your focus of 
attention diagonally by half a pinwheel, you will notice that there is a fylfot right there 
before your eyes! In fact, suddenly they appear all over the central section where before 
you'd been seeing only crosses inside pinwheels! And conversely, of course, now when 
you look at either end, you'll see pinwheels everywhere with crosses inside them. No 
fylfots! It is an astonishingly simple design, yet this effect catches nearly everyone really 
off guard. 

This is a simple example of the ubiquitous visual phenomenon called regrouping, 
in which the boundary line of the unit cell shifts so that structures jump out at the eye that 
before were completely submerged and invisible-while conversely, of course, structures 
that a moment ago were totally obvious have now become invisible, having been split 
into separate conceptual pieces by the act of regrouping, or shift of perceptual 
boundaries. It is both a perceptual and conceptual phenomenon, a delight to that subtle 
mixture of eye and mind that is most sensitive to pattern. 

For another example of regrouping, take a look at "Crossover" (Figure 10-2), also 
executed at Carnegie-Mellon in 1963 by Richard Lane. Something really amazing 
happens in the middle, but I won't tell you what. Just find it yourself by careful looking. 

By the way, there are still features left to be explained in "Fylfot Flipflop". At 
first it appears to be mirror-symmetric. For instance, all the fylfots at the left end are 
spinning counterclockwise, while all the ones at the right end are spinning clockwise. So 
far, so symmetric. But in the middle, all the fylfots go counterclockwise. This surely 
violates the symmetry. Furthermore, the one-quarter-way and three-quarter-way stages of 
this deformation, which ought to be mirror images of each other, bear no resemblance at 
all to each other. Can you figure out the logic behind this subtle asymmetry between the 
left and right sides? 

This piece also illustrates one more way in which parquet deformations resemble 
music. A unit cell-or rather, a vertical cross-section consisting of a stack of unit cells-is 
analogous to a measure in music. The regular pulse of a piece of music is given by the 
repetition of unit cells across the page. 



And the flow of a melodic line across measure boundaries is modeled by the flow of a 
visual line-such as the mountain range lines-across many unit cells. 

* * * 

Bach's music is always called up in discussions of the relationship of 
mathematical patterns to music, and this occasion is no exception. I am reminded 
especially of some of his texturally more uniform pieces, such as certain preludes from 
the Well-Tempered Clavier, in which in each measure there is a certain pattern executed 
once or twice, possibly more times. From measure to measure this pattern undergoes a 
slow metamorphosis, meandering over the course of many measures from one region of 
harmonic space to far distant regions and then slowly returning via some circuitous route. 
For specific examples, you might listen to (or look at the scores of): Book I, numbers 1, 
2; Book II, numbers 3, 15. Many of the other preludes have this feature in places, though 
not for their entirety. 

Bach seldom deliberately set out to play with the perceptual systems of his 
listeners. Artists of his century, although they occasionally played perceptual games, 
were considerably less sophisticated about, and less fascinated with, issues that we now 
deem part of perceptual psychology. Such phenomena as regrouping would undoubtedly 
have intrigued Bach, and I for one sometimes wish that he had known of and been able to 
try out certain effects-but then I remind myself that whatever time Bach might have spent 
playing with new-fangled ideas would have had to be subtracted from his time to produce 
the masterpieces that we know and love, so why tamper with something that precious? 

On the other hand, I don't find that argument 100 percent compelling. Who says 
that if you're going to imagine playing with the past, you have to hold the lifetimes of 
famous people constant in length? If we can imagine telling Bach about perceptual 
psychology, why can't we also imagine adding a few extra years to his lifetime to let him 
explore it? After all, the only divinely imposed (that is, absolutely unslippable) constraint 
on Bach's years is that they and Mozart's years add up to 100, no? So if we award Bach 
five extra ones, then we merely take five years away from Mozart. It's painful, to be sure, 
but not all that bad. We could even let Bach live to 100 that way! (Mozart would never 
have existed.) It starts to get a little questionable if we go much beyond that point, 
however, since it is not altogether clear what it means to live a negative number of years. 

Although it is difficult to imagine and impossible to know what Bach's music 
would have been like had he lived in the twentieth century, it is certainly not impossible 
to know what Steve Reich's music would have been like, had he lived in this century. In 
fact, I'm listening to a record of it right now (or at least I would have been if I hadn't 
gotten distracted by this radio program). Now Reich's is music that really is conscious of 
perceptual psychology. All the way through, he plays with perceptual shifts and 



ambiguities, pivoting from one rhythm to another, from one harmonic origin to another, 
constantly keeping the listener on edge and tingling with nervous energy. Imagine a piece 
like Ravel's "Bolero", only with a much finer grain size, so that instead of roughly a 
one-minute unit cell, it has a three-second unit cell. Its changes are tiny enough that 
sometimes you barely can tell it is changing at all, while other times the changes jump 
out at you. What Reich piece am I listening to (or rather, would I be listening to if I 
weren't still listening to this radio program)? Well, it hardly matters, since most of them 
satisfy this characterization, but for the sake of specificity you might try "Music for a 
Large Ensemble", "Octet", "Violin Phase", "Vermont Counterpoint", or his recent choral 
work "Tehillim". 

* * * 

Let us now return to parquet deformations. "Dizzy Bee" (Figure 10-3), executed 
by Richard Mesnik at Carnegie-Mellon in 1964, involves perceptual tricks of another 
sort. The left side looks like a perfect honeycomb or-somewhat less poetically-a perfect 
bathroom floor. However, as we move rightward, its perfection seems cast in doubt as the 
rigidity of the lattice gives way to rounder-seeming shapes. Then we notice that three of 
them have combined to form one larger shape: a super hexagon made up of three rather 
squashed pentagons. The curious thing is that if we now sweep our eyes right to left, back 
to the beginning, we can no longer 

FIGURE 10-3. Dizzy Bee, by Richard Mesnik. Created in the studio of William Huff 



FIGURE 10-4. Consternation, by Scott Grady. Created in the studio of William Huff 

see the left side in quite the way we saw it before. The small hexagons now are constantly 
grouping themselves into threes, although the grouping changes quickly. We experience 
"flickering clusters" in our minds, in which groups form for an instant and then disband, 
their components immediately regrouping in new combinations, and so on. The poetic 
term "flickering clusters" comes from a famous theory of how water molecules behave, 
the bonding in that case coming from hydrogen bonds rather than mental ones. (See the 
P.S. to Chapter 26.) 

Even more dizzying, perhaps, than "Dizzy Bee" is "Consternation" (Figure 10-4), 
executed by Scott Grady of SUNY at Buffalo in 1977. This is another parquet 
deformation in which hexagons and cubes vie for perceptual supremacy. This one is so 
complex and agitated in appearance that I scarcely dare to attempt an analysis. In its 
intermediate regions, I find the same extremely exciting kind of visual pseudo-chaos as in 
Escher's best deformations. 

Perhaps irrelevantly, but I suspect not, the names of many of these studies remind 
me of pieces by Zez Confrey, a composer most famous during the twenties for his 
novelty piano solos such as "Dizzy Fingers", "Kitten on the Keys", and-my favorite- 
"Flutter by, Butterfly". Confrey specialized in pushing rag music to its limits without 
losing musical charm, and some of the results seem to me to have a saucy, dazzling 
appeal not unlike the jazzy appearance of this parquet deformation, and others. 

The next parquet deformation, "Oddity out of Old Oriental Ornament" 



FIGURE 10-5. Oddity out of Old Oriental Ornament, by Francis O'Donnell. Created in 
the studio of William Huff (1966). 

(Figure 10-5), executed by Francis O'Donnell at Carnegie-Mellon in 1966, is based on an 
extremely simple principle: the insertion of a "hinge" in one single line segment, and 
subsequent flexing of the segment at that hinge! The reason for the stunningly rich results 
is that the unit cell that creates the tessellation occurs both vertically and horizontally, so 
that flexing it one way induces a crosswise flexing as well, and the two flexings combine 
to yield this curious and unexpected pattern. 

Another one that shows the amazing results of an extremely simple but carefully 
chosen transformation principle is "Y Knot" (Figure 10-6), 

FIGURE 10-6. Y Knot, by Leland Chen. Created in the studio of William Huff (1977). 


executed by Leland Chen at SUNY at Buffalo in 1977. If you look at it with full 
attention, you will see that its unit cell is in the shape of a three-bladed propeller, and that 
unit cell never changes whatsoever in shape. All that does change is the 'Y' lodged tightly 
inside that unit cell. And the only way that 'Y' changes is by rotating clockwise very 
slowly! Admittedly, in the final stages of rotation, this forces some previously constant 
line segments to extend themselves a little bit, but this does not change the outline of the 
unit cell whatsoever. What well-chosen simplicity can do! 



FIGURE 10-7. Crazy Cogs, by Arne Larson. Created in the studio of William Huff 


Three of my favorites are "Crazy Cogs" (Figure 10-7, done by Arne Larson, Carnegie- 
Mellon, 1963), "Trifoliolate" (Figure 10-8, done by Glen Paris, Carnegie-Mellon, 1966), 
and "Arabesque" (Figure 10-9, done by Joel Napach, SUNY at Buffalo, 1979). They all 
share the feature of getting more and more intricate as you move rightward. Most of the 
earlier ones we've seen don't have this extreme quality of irreversibility-that is, the 
ratcheted quality that signals that an evolutionary process is taking place. I can't help 
wondering if the designers didn't feel that they'd painted themselves into a corner, 
especially in the case of "Arabesque". Is there any way you can back out of that super- 
tangle except by retrograde motion-that is, retracing your steps? I suspect there is, but I 
wouldn't care to try to discover it. 

To contrast with this, consider "Razor Blades", an extended study in relative 
calmness (Figure 10-10). It was done at Carnegie-Mellon in 1966, but unfortunately it is 
unsigned. Like the first one we discussed, this one can be broken up into very long 
waving horizontal lines and vertical structures crossing them. It's a little easier to see 
them if you start at the right side. For instance, you can see that just below the top, there 
is a long snaky line 

FIGURE 10-8. Trifoliolate, by Glen Paris. Created in the studio of William Huff (1966). 



FIGURE 10-9. Arabesque, by Joel Napach. Created in the studio of William Huff (1979). 


FIGURE 10-10. Razor Blades (unsigned). Created in the studio of William Huff (1966). 


with numerous little "nicks" in it, undulating its way leftwards and in so doing shedding 
some of those nicks, so that at the very left edge it has degenerated into a perfect "square 
wave", as such a periodic wave form is called in Fourier analysis. Complementing this 
horizontal structure is a similar vertical structure that is harder to describe. The thought 
that comes to my mind is that of two very ornate, rather rectangular hourglasses with 
ringed necks, one on top of the other. But you can see for yourself. 

As with "Fylfot Flipflop" (Figure 10-1), each of these patterns by itself is 
intriguing, but of course the real excitement comes from the daring act of superimposing 
them. Incidentally, I know of no piece of visual art that better captures the feeling of 
beauty and intricacy of a Steve Reich piece, created by slow "adiabatic" changes floating 
on top of the chaos and dynamism of the lower-level frenzy. Looking back, I see I began 
by describing this parquet deformation as "calm". Well, what do you know? Maybe I 
would be a good candidate for inclusion in The New Yorker's occasional notes titled 
"Our Forgetful Authors". 

More seriously, there is a reason for this inconsistency. One's emotional response 
to a given work of art, whether visual or musical, is not static and unchanging. There is 
no way to know how you will respond, the next time you hear or see one of your favorite 
pieces. It may leave you unmoved, or it may thrill you to the bones. It depends on your 
mood, what has recently happened, what chances to strike you, and many other subtle 
intangibles. One's reaction can even change in the course of a few minutes. So I won't 
apologize for this seeming lapse. 

Let us now look at "Cucaracha" (Figure 10-11), executed in 1977 by Jorge 
Gutierrez at SUNY at Buffalo. It moves from the utmost geometricity-a lattice of perfect 
diamonds-through a sequence of gradually more arbitrary modifications until it reaches 
some kind of near-freedom, a dance of strange, angular, quasi-organic forms. This 
fascinates me. Is entropy increasing or decreasing in this rightward flow toward freedom? 

A gracefully spiky deformation is the one wittily titled "Beecombing Blossoms" 
(Figure 10-12), executed this year by Laird Pylkas at SUNY at Buffalo. Huff told me that 
Pylkas struggled for weeks with this one, and at the end, when she had satisfactorily 
resolved her difficulties, she mused, "Why is it that the obvious ideas always take so long 
to discover?" 

* * * 

As our last study, let us take "Clearing the Thicket" (Figure 10-13), executed in 1979 by 
Vincent Marlowe at SUNY at Buffalo, which involves a mixture of straight lines and 
curves, right angles and cusps, explicit squarish swastikoids and implicit circular holes. 
Rather than demonstrate my inability to analyze the ferocious complexity of this design, I 
would like to use it as the jumping-off point for a discussion of computers and creativity - 
one of my favorite hobbyhorses. 



FIGURE 10-11. Cucaracha, by Jorge Gutierrez. Created in the studio of William Huff (1977). 



FIGURE 10-13. Clearing the Thicket, by Vincent Marlowe. Created in the studio of 
William Huff (1979). 

Some totally new things are going on in this parquet deformation-things that have 
not appeared in any previous one. Notice the hollow circles on the left side that shrink as 
you move rightward; notice also that on the right side there are hollow "anticircles" 
(concave shapes made from four circular arcs turned inside out) that shrink as you move 
leftward. Now, according to Huff, such an idea had never appeared in any previously 
created deformations. This means that something unusual happened here- something 
genuinely creative, something unexpected, unpredictable, surprising, intriguing-and not 
least, inspiring to future creators. 

So the question naturally arises: Would a computer have been able to invent this 
parquet deformation? Well, put this way it is a naive and ill-posed question, but we can 
try to make some sense of it. The first thing to point out is that, of course, the phrase "a 
computer" refers to nothing more than an inert hunk of metal and semiconductors. To go 
along with this bare computer, this hardware, we need some software and some energy. 
The former is a specific pattern inserted into the matter binding it with constraints yet 
imbuing it with goals; the latter is what breathes "life" into it, making it act according to 
those goals and constraints. 

The next point is that the software is what really controls what the machine does; 
the hardware simply obeys the software's dictates, step by step. And yet, the software 
could exist in a number of different "instantiations"-that is, realizations in different 
computer languages. What really counts about the software is not its literal aspect, but a 
more abstract, general, overall "architecture", which is best described in a nonformal 
language, such as English. We might say that the plan, the sketch, the central idea of a 
program is what we are talking about here-not its final realization in some specific formal 
language or dialect. That is something we can leave to apprentices to carry out, after we 
have presented them with our informal sketch. 

So the question actually becomes less mundane- sounding, more theoretical and 
philosophical: Is there an architecture to creativity? Is there a 



plan, a scheme, a set of principles that, if elucidated clearly, could account for all the 
creativity embodied in the collection of all parquet deformations, past, present, and future? 

* * * 

Note that we are asking about the collection of parquet deformations, not about 
some specific work. It is a truism that any specific work of art can be recreated, even 
recreated in various slightly novel ways, by a programmed computer. 

For example, the Dutch artist Piet Mondrian evolved a highly idiosyncratic, 
somewhat cryptic style of painting over a period of many years. You can see, if you trace 
his development over the course of time, exactly where he came from and where he was 
headed. But if you focus in on just a single Mondrian work, you cannot sense this stylistic 
momentum-this quality of dynamic, evolving style that any great artist has. Looking at 
just one work in isolation is like taking a snapshot of something in motion: you capture 
its instantaneous position but not its momentum. Of course, the snapshot might be 
blurred, in which case you get a sense of the momentum but lose information about the 
position. But when you are looking at just a single work of art, there is no mental blurring 
of its style with that of recent works or soon-to-come works; you have exact position 
information ("What is the style now?"), but no momentum information ("Where was it 
and where is it going?"). 

Some years ago, the mathematician and computer artist A. Michael Noll took a 
single Mondrian painting - an abstract, geometric study with seemingly random elements - and from it extracted some statistics concerning the patterns. Given these statistics, he then programmed a computer to generate numerous "pseudo-Mondrian paintings" having the same or different values of these randomness-governing parameters. (See Figure 10-14.) Then he showed the results to naive viewers. The reactions were interesting, in that
more people preferred one of the pseudo-Mondrians to the genuine Mondrian! 
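Noll's actual program is not reproduced in this column, so the following Python sketch is purely illustrative: it assumes a toy representation of a composition as a list of short bars, extracts a few randomness-governing parameters from it, and then generates a "pseudo-painting" governed by those same parameters. The bar format and all function names are hypothetical.

```python
import random

# Hypothetical sketch of statistics-driven imitation (not Noll's program).
# A "painting" is a list of bars: (x, y, length, orientation).

def extract_stats(bars):
    """Measure the randomness-governing parameters of a composition."""
    n = len(bars)
    mean_len = sum(b[2] for b in bars) / n
    frac_horizontal = sum(1 for b in bars if b[3] == 'h') / n
    return {'count': n, 'mean_len': mean_len, 'frac_h': frac_horizontal}

def generate(stats, width=100, height=100, seed=None):
    """Produce a new composition sharing the measured statistics."""
    rng = random.Random(seed)
    bars = []
    for _ in range(stats['count']):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        # Exponential lengths with the same mean as the original.
        length = rng.expovariate(1 / stats['mean_len'])
        orient = 'h' if rng.random() < stats['frac_h'] else 'v'
        bars.append((x, y, length, orient))
    return bars

original = [(10, 20, 5.0, 'h'), (40, 70, 3.0, 'v'), (55, 15, 4.0, 'h')]
stats = extract_stats(original)
imitation = generate(stats, seed=1)
```

The point of the sketch is only that the imitation matches the original in its measured parameters, not in any of the conceptual choices that produced them.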

This is quite amusing, even provocative, but it also is a warning. It proves that a computer can certainly be programmed, after the fact, to imitate - and imitate well - the mathematically capturable stylistic aspects of a given work. But it also warns us: Beware of cheap imitations!

Consider the case of parquet deformations. There is no doubt that a computer 
could be programmed to do any specific parquet deformation - or minor variations on it - without too much trouble. There just aren't that many parameters to any given one. But
the essence of any artistic act resides not in selecting particular values for certain 
parameters, but far deeper: it's in the balancing of a myriad intangible and mostly 
unconscious mental forces, a judgmental act that results in many conceptual choices that 
eventually add up to a tangible, perceptible, measurable work of art. 

Parquet Deformations: A Subtle, Intricate Art Form 


FIGURE 10-14. One genuine Mondrian plus three computer imitations. Can you spot the 
Mondrian? If you rotate the figure so that east becomes south, it will be the one in the 
northwest corner. The Mondrian, done in 1917, is titled Composition with Lines; the 
three others, done in 1965, comprise a work called Computer Composition with Lines, 
and were created by a computer at Bell Telephone Laboratories at the behest of 
computer tamer A. Michael Noll. The subjectively "best" picture was found through surveys; it is the one diagonally opposite the genuine Mondrian!

Once the finished work exists, scholars looking at it may seize upon certain qualities of it that lend themselves easily to being parametrized. Anyone can do statistics on a work of art once it is there for the scrutiny, but the ease of doing so can obscure the fact that no one could have said, a priori, what kinds of mathematical observables would turn out to be relevant to the capturing of stylistic aspects of the as-yet-unseen work of art.

Huff's own view on this question of mechanizing the art of parquet deformations closely parallels mine. He believes that some basic principles could be formulated at the present time enabling a computer to come up with relatively stereotyped yet novel creations of its own. But, he stresses, his students occasionally come up with rule-breaking ideas that nonetheless enchant the eye for
deeper reasons than he has so far been able to verbalize. And so, this way, the set of 
explicit rules gets gradually increased. 

Comparing the creativity that goes into parquet deformations with the creativity 
of a great musician, Huff has written: 

I don't know about the consistency of the genius of Bach, but I did work 
with the great American architect Louis Kahn (1901- 1974) and suppose that it 
must have been somewhat the same with Bach. That is, Kahn, out of moral, 
spiritual, and philosophical considerations, formulated ways he would and ways he 
would not do a thing in architecture. Students came to know many of his ways, and 
some of the best could imitate him rather well (though not perfectly). But as Kahn 
himself developed, he constantly brought in new principles that brought new 
transformations to his work; and he even occasionally discarded an old rule. 
Consequently, he was always several steps ahead of his imitators who knew what 
was but couldn't imagine what will be. So it is that computer-generated "original" Bach is an interesting exercise. But it isn't Bach - that unwritten work that Bach never got to, the day after he died.

The real question is: What kind of architecture is responsible for all of these 
ideas? Or is there any one architecture that could come up with them all? I would say that 
the ability to design good parquet deformations is probably deceptive, in the same way as 
the ability to play good chess is: it looks more mathematical than it really is. 

A brilliant chess move, once the game is over and can be viewed in retrospect, can be seen as logical - as "the correct thing to do in that situation". But brilliant moves do
not originate from the kind of logical analysis that occurs after the game; there is no time 
during the game to check out all the logical consequences of a move. Good chess moves 
spring from the organization of a good chess mind: a set of perceptions arranged in such a 
way that certain kinds of ideas leap to mind when certain subtle patterns or cues are 
present. This way that perceptions have of triggering old and buried memories underlies 
skill in any type of human activity, not only chess. It's just that in chess the skill is 
particularly deceptive, because after the fact, it can all be justified by a logical analysis, a 
fact that seems to hint that the original idea came from logic. 

Writing lovely melodies is another one of those deceptive arts. To the 
mathematically inclined, notes seem like numbers and melodies like number patterns. 
Therefore all the beauty of a melody seems as if it ought to be describable in some simple 
mathematical way. But so far, no formula has produced even a single good melody. Of 
course, you can look back at any melody and write a formula that will produce it and 
variations on it. But that is retrospective, not prospective. Lovely chess moves and lovely 
melodies (and lovely theorems in mathematics, etc.) have this in common: every one has 
idiosyncratic nuances that seem logical a posteriori but that are not easy to anticipate a priori. To the mathematical mind, chess-playing skill and melody-writing
skill and theorem-discovering skill seem obviously formalizable, but the truth turns out to 
be more tantalizingly complex than that. Too many subtle balances are involved. 

So it is with parquet deformations, I reckon. Each one taken alone is in some 
sense mathematical. However, taken as a class, they are not mathematical. This is what's 
tricky about them. Don't let the apparently mathematical nature of an individual one fool 
you, for the architecture of a program that could create all these parquet deformations and 
more good ones would have to incorporate computerized versions of concepts and 
judgments-and those are much more elusive and complex things than are numbers. In a 
way, parquet deformations are an ideal case with which to make this point about the 
subtlety of art, for the very reason that each one on its own appears so simple and rule-bound.

At this point, many critics of computers and artificial intelligence, eager to find something that "computers can't do" (and never will be able to do), often jump too far:
they jump to the conclusion that art and, more generally, creativity, are fundamentally 
uncomputerizable. This is hardly the implied conclusion! The implied conclusion is just 
this: that for computers to act human, we will have to wait until we have good computer 
models of such human things as perception, memory, mental categories, learning, and so 
on. We are a long way from that. But there is no reason to assume that those goals are in 
principle unattainable, even if they remain far off for a long time. 

* * * 

I have been playing with the double meaning, in this column, of the term 
"architecture": it means both the design of a habitat and the abstract essence of a grand 
structure of any sort. The former has to do with hardware and the latter with software. In 
a certain sense, William Huff is a professor of both brands of architecture. Obviously his 
professional training is in the design of "hardware": genuine habitats for humans, and he 
is in a school where that is what they do. But he is also in the business of forming, in the 
minds of his students, a softer type of architecture: the mental architecture that underlies 
the skill to create beauty. Fortunately for him, he can take for granted the whole 
complexity of a human brain as his starting point upon which to build this architecture. 
But even so, there is a great art to instilling a sensitivity for beauty and novelty. 

When I first met William Huff and saw how abstract and seemingly impractical were the marvelous works produced in his design studio - ranging from parquet deformations to strange ways of slicing a cube to gestalt studies using thousands of dots to eye-boggling color patterns - I at first wondered why this man was a professor of
architecture. But after conversing with him and his colleagues, my horizons were 
extended about the nature of their discipline. 



The architect Louis Kahn had great respect for the work of William Huff, and it is 
with his words that I would like to conclude: 

What Huff teaches is not merely what he has learned from someone else, 
but drawn from his natural gifts and belief in their truth and value. In my 
belief what he teaches is the introduction to discipline underlying shapes and 
rhythms, which touches the arts of sight, the arts of sound, and the arts of structure. 
It teaches students of drawing to search for the abstract and not the representational. 
This is so good as a reminder of order for the instructors/architectural sketchers 
(like me), and so good especially for the student sketchers without background. It is the introduction to exactitudes of the kind that instill the religion of the ordered path.

Post Scriptum. 

"The religion of the ordered path" -a lovely phrase. I did not know at the time this 
column was written that it would be my last full column (the one reporting on the results 
of the Luring Lottery, here Chapter 31, was only a half-column). Both William Huff and I 
were pleased with my bowing out this way, and I was especially pleased with the phrase 
with which I bowed out. Though ambiguous, it captures much of the spirit that I 
attempted to get across in all my columns: dedicated questing after patterned beauty, and 
particularly after the reasons that certain particular patterns are beautiful. 

In this column, I repeatedly claimed that it is relatively easy to make a computer 
program that creates attractive art within a formula, but not at all easy to make a 
computer program that constantly comes up with novelty. Some people familiar with the 
computer art produced in the last couple of decades might pick a fight with me over this. 
They might point to complex patterns produced by simple algorithms, and then add that 
there are certain simple algorithms which, when you change merely a few parameters, 
come up with astonishingly different patterns that no human would be likely to recognize 
as being each other's near kin. An example is a very simple program I know, which fills a screen with rapidly changing sixfold-symmetric dot-patterns that look like magnified
snowflakes; in just a few seconds, any given pattern will dissolve and be replaced by an 
unbelievably different sixfold-symmetric pattern. I have stood transfixed at a screen 
watching these patterns unfold one after another, unable to anticipate in the slightest what 
will happen next-and yet knowing that the program itself is only a few lines long! I have 
seen small changes in mathematical formulas produce enormous visual changes in what 
those formulas represent, graphically. 
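The actual snowflake program is not reproduced here, but the flavor of such a scheme can be conveyed in a minimal Python sketch (a hypothetical reconstruction, not the program described above): a handful of random dots is stamped at all six rotations of 60 degrees, so every pattern is automatically sixfold-symmetric, yet twiddling a single knob - the random seed - yields an utterly different-looking pattern.

```python
import math
import random

def snowflake(n_dots, seed):
    """Generate a sixfold-symmetric dot pattern as (x, y) points."""
    rng = random.Random(seed)
    # Seed dots in polar coordinates, confined to one 60-degree wedge.
    base = [(rng.uniform(0, 1), rng.uniform(0, math.pi / 3))
            for _ in range(n_dots)]
    points = []
    for r, theta in base:
        for k in range(6):               # replicate at 60-degree intervals
            a = theta + k * math.pi / 3
            points.append((r * math.cos(a), r * math.sin(a)))
    return points

pattern = snowflake(n_dots=12, seed=42)  # 12 seed dots -> 72 points
```

Each pattern is guaranteed symmetric by construction, since every seed dot appears at all six rotations; the unpredictability from one seed to the next is what makes watching such a program so hypnotic.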

The trouble is, these parameter-based changes - knob-twiddlings, as they are called in Chapters 12 and 13 - are of a different nature than the kinds of novel ideas people come up with when they vary a given idea. For a machine to make
simple variants of a given design, it must possess an algorithm for making that design 
which has explicit parameters; those parameters are then modifiable, as with the pseudo-Mondrian paintings. But the way people make variations is quite different. They look at
some creation by an artist (or computer), and then they abstract from it some quality that 
they observe in the creation itself (not in some algorithm behind it). This newly 
abstracted quality may never have been thought of explicitly by the artist (or programmer 
or computer), yet it is there for the seeing by an acute observer. This perceptual act gets 
you more than half the way to genuine creativity; the remainder involves treating this 
new quality as if it 

were an explicit knob: "twiddling" it as if it were a parameter that had all along been in the program that made the creation.

FIGURE 10-15. I at the Center, by David Oleson. Created in the studio of William Huff.

That way, the perceptual process is intimately linked up with the generative 
process: a loop is closed in which perceptions spark new potentials and experimentation 
with new potentials opens up the way for new perceptions. The element lacking in 
current computer art is the interaction of perception with generation. Computers do not 
watch what they do; they simply do it. (See Chapter 23 for more on the idea of self-watching computers.) When programs are able to look at what they've done and perceive it in ways that they never anticipated, then you'll start to get close to the kinds of insight-giving disciplined exercises that Louis Kahn was speaking of when he wrote of the
"religion of the ordered path". 

* * * 

One of my favorite parquet deformations is called "I at the Center" (Figure 10-15), and was done by David Oleson at Carnegie-Mellon in 1964. This one violates the premise with which I began my article: one-dimensionality. It develops its central theme - the uppercase letter I - along two perpendicular dimensions at once. The result is one of
the most lyrical and graceful compositions that I have seen in this form. 

I am also pleased by the metaphorical quality it has. At the very center of a mesh 
is an I - an ego; touching it are other things - other I's - very much like the central I, but not quite the same and not quite as simple; then as one goes further and further out, the variety of I's multiplies. To me this symbolizes a web of human interconnections. Each of
us is at the very center of our own personal web, and each one of us thinks, "I am the 
most normal, sensible, comprehensible individual." And our identity-our "shape" in 
personality space-springs largely from the way we are embedded in that network- which is 
to say, from the identities (shapes) of the people we are closest to. This means that we 
help to define others' identities even as they help to define our own. And very simply but 
effectively, this parquet deformation conveys all that, and more, to me. 




Stuff and Nonsense 

December, 1982 

Buz, quoth the blue fly, 
Hum, quoth the bee, 
Buz and hum they cry, 
And so do we: 

In his ear, in his nose, thus, do you see?
He ate the dormouse, else it was he.

-Ben Jonson 

Eh? What does this mean? What is its point? This little nonsense poem, written 
around 1600, begins with an image of insects, slides into an image of someone's face, and 
concludes with an uncertain reference to the devourer of a certain rodent. Although it 
makes little sense, it is still somehow enjoyable. It reminds us of a nursery rhyme. It is 
comfortable, cute, droll. 

Nonsense has been around for a long time. Its style and tone have changed over 
the centuries, however. The path of development of nonsense is interesting to trace. What 
marks something off as being nonsense? When does nonsense spill over into sense, or 
vice versa? Where are the borderlines between nonsense and poetry? These are issues to 
be explored in this column. 

A century and a half after Jonson wrote his poem, an English actor named Charles 
Macklin became notorious for boasting that he could memorize any passage on one 
hearing. To challenge Macklin, his friend the dramatist Samuel Foote wrote the following 
odd passage: 

So she went into the garden to cut a cabbage-leaf to make an apple-pie; and, at the 
same time, a great she-bear coming up the street pops its head into the shop-What! 
no soap? So he died; and, she very imprudently married the barber: and there were 
present the Picninnies, and the Joblilies, and the Garyulies, and the great 
Panjandrum himself, with the little round button at 



top. And they all fell to playing the game of 'catch as catch can', till the gunpowder
ran out at the heels of their boots. 

Full of non sequiturs and awkward, choppy sentences, this must have been an excellent 
challenge for Macklin. Unfortunately, we have no record as to how he fared on first 
hearing it, but we do know that he enjoyed the passage immensely, and went around 
reciting it with great gusto for years thereafter. 

In the nineteenth century, the reigning monarchs of nonsense were Lewis Carroll 
and Edward Lear. Everyone knows Carroll's "Jabberwocky", "Tweedledum and 
Tweedledee", and "The Walrus and the Carpenter"; most people have heard of Lear's 
"The Owl and the Pussycat". Fewer have heard of Lear's "The Pobble Who Has No Toes" 
or "The Dong with the Luminous Nose". Carroll and Lear both enjoyed inventing strange 
words and using them innocently, as if they were commonplace. Their nonsense was 
expressed largely in poems, where they indulged in much alliteration, many internal 
rhymes, catchy rhythms, and offbeat imagery. Rather than exhibit works of those two
authors, I have instead chosen, to represent their era, an anonymous poem with some of 
the same charming qualities: 


In loopy links the canker crawls, 
Tads twiddle in their 'polian glee, 
Yet sinks my heart as water falls. 
The loon that laughs, the babe that bawls, 
The wedding wear, the funeral palls, 
Are neither here nor there to me. 
Of life the mingled wine and brine 
I sit and sip pipslipsily. 

Many of Carroll's nonsense poems were parodies of popular songs or ditties of his 
day. Ironically, the parodies are remembered, and the things that triggered them are 
mostly completely forgotten. Carroll loved to poke fun, in his gentle manner, at the stuffy 
mores and hypocritical mannerisms of society. One of the characteristics of "genteel" 
poetry of the nineteenth century was its precious use of classical literary allusions. Carroll 
seldom parodied this quality, but Charles Battell Loomis, a little-known writer, admirably
caught the style in this poem. 


Oh, limpid stream of Tyrus, now I hear
The pulsing wings of Armageddon's host,
Clear as a colcothar and yet more clear
(Twin orbs, like those of which the Parsees boast);



Down in thy pebbled deeps in early spring
The dimpled naiads sport, as in the time 
When Ocidelus with untiring wing 
Drave teams of prancing tigers, 'mid the chime 

Of all the bells of Phicol. Scarcely one 
Peristome veils its beauties now, but then 
Like nascent diamonds, sparkling in the sun, 
Or sainfoin, circinate, or moss in marshy fen. 

Loud as the blasts of Tubal, loud and strong, 
Sweet as the songs of Sappho, aye more sweet; 
Long as the spear of Arnon, twice as long, 
What time he hurled it at King Pharaoh's feet. 

This poem has the curious quality that when you read it, you feel that surely it makes 
sense-perhaps another reading will reveal it to you. And then you read it again and find 
that same head-scratching feeling comes back to you. This is a problem with much 
modern poetry: It is very hard to be certain that you're not simply being taken for a ride 
by the poet, sucked in by some practical joker who has actually nothing in mind except 
tricking readers into thinking there is profound meaning where there is none. 

The limerick is a form of poetry often featured in nonsense anthologies, probably 
because it is a playful form. However, very few limericks make no sense. They may 
involve mild impossibilities, such as a young woman who travels faster than the speed of 
light, or other more off-color feats, but in actuality, limericks are seldom nonsensical. 
One limerick that, in its own way, is pure nonsense is the following gem, by W. S. 
Gilbert (of Gilbert and Sullivan): 

There was an old man of St. Bees, 
Who was stung in the arm by a wasp. 
When asked, "Does it hurt?" 
He replied, "No, it doesn't -
I'm so glad it wasn't a hornet."

Why do I call this "nonsense"? Well, if it were a prose sentence, nothing about it would 
attract much attention, except perhaps the name of the town. The nonsense is certainly 
not in the content, but in the way it utterly violates every standard set up for the limerick 
form. It doesn't rhyme, its meter is a little bumpy, and it has absolutely nothing funny in 
it- which is what makes it funny. And that makes it qualify as nonsense. 



Is nonsense always funny? Up until the twentieth century, it certainly seemed that way. In fact, nonsense and humor have traditionally been so closely allied that anthologies of nonsense seem to be composed largely of humorous passages of any sort whatsoever, irrespective of how sensible they are. But nonsense and humor took widely divergent paths in the early twentieth century. Perhaps the greatest nonsense writer who ever lived was Gertrude Stein, although she is seldom mentioned in this connection. Entire collections of nonsense have been published without featuring a single piece of her work. Her most audacious piece in this genre is a volume of nearly 400 pages, modestly titled How to Write. Here is a sample taken from the chapter called "Arthur a Grammar".

Arthur a grammar. 
Questionnaire in question. 
What is a question. 
Twenty questions. 

A grammar is an astrakhan coat in black and other colors it is an obliging management of 
their requesting in indulgence made mainly as if in predicament as in occasion made 
plainly as if in serviceable does it shine. 
A question and answer. 
How do you like it. 

Grammar can be contained on account of their providing medaling in a ground of 
allowing with or without meant because which made coupled become blanketed with a 
candidly increased just as if in predicting example of which without meant and coupled 
inclined as much without meant to be thought as if it were as ably rested too. 
Considerable as it counted heavily in part. 

What is grammar when they make it round and round. As round as they are called. 

Did they guess whether they wished. A politely definitely detailed blame' of when they 


What is a grammar ordinarily. A grammar is question and answer answer undoubted 

however how and about. 

What is Arthur a grammar. 

Arthur is a grammar. 

Arthur a grammar. 

What can there be in a difficulty. 

Seriously in grammar. 

Thinking that a little baby can sigh. 

That is so much. 

Sayn can say only he is dead that he is interested in what is said. That is another in 

Better and flutter must and man can beam. 
Now think of seams. 

Embroidery consists in remembering that it is but what she meant. There an instance of 

Suppose embroidery is two and two. There can be reflected that it is as if it were having 
red about. 

This is an instance of having settled it. 



Grammar uses twenty in a predicament. Include hyacinths and mosses which 
grow to abundance. 

Grammar. In picking hyacinths quickly they suit admirably this makes 

grammar a preparation. Grammar unites parts and praises. In just this way. Grammar 


Grammar perhaps grammar. 

It is quite perplexing. It is simply an absurd string of non sequiturs, often totally lacking grammar, meandering randomly from "topic" to "topic". It is frustrating because there is nothing to grab onto. It is like trying to climb a mountain made of sand.

Stein's experiments in absurdity parallel the Dadaist and Surrealist movements of roughly the same period, and they mark the trend away from exuberant and laughable nonsense toward troubling and, later, macabre nonsense. However, her work still has a freshness and silliness that makes it amusing and light rather than disturbing and heavy.

* * * 

As we move further into the twentieth century, we encounter the philosophy of existentialism and the master expositor of existential malaise, Irish-born playwright Samuel Beckett. In Beckett's most famous play, Waiting for Godot, written in the early 1950's, the pathetic character ironically called "Lucky" has exactly one speech, coming in about the middle of the play. He has been taunted by the other characters with cries of "pig!" and with sharp tugs on the rope around his neck, by which they are holding him. Eventually he is driven beyond the breaking point, and out pours an incoherent, wild, tormented piece of absolute confusion, resembling regurgitated academic coursework crossed with stock phrases and garbled memorized lists of one sort and another. Here is Lucky's famous speech:

Given the existence as uttered forth in the public works of Puncher and Wattmann of a 
personal God quaquaquaqua with white beard quaquaquaqua outside time without 
extension who from the heights of divine apathia divine athambia divine aphasia loves us 
dearly with some exceptions for reasons unknown but time will tell and suffers like the 
divine Miranda with those who for reasons unknown but time will tell are plunged in 
torment plunged in fire whose fire flames if that continues and who can doubt it will fire 
the firmament that is to say blast hell to heaven so blue still and calm so calm with a calm 
which even though intermittent is better than nothing but not so fast and considering what 
is more that as a result of the labors left unfinished crowned by the Acacacacademy of 
Anthropopopometry of Essy-in-Possy of Testew and Cunard it is established beyond all 
doubt all other doubt than that which clings to the labors of men that as a result of the 
labors unfinished of Testew and Cunard it is established as hereinafter but not so fast for 
reasons unknown that as a result of the public works of Puncher and Wattmann it is 
established beyond all doubt that in view of the labors of Fartov and Belcher left unfinished for reasons
unknown of Testew and Cunard left unfinished it is established what many deny that man 
in Possy of Testew and Cunard that man in Essy that man in short that man in brief in 
spite of the strides of alimentation and defecation wastes and pines wastes and pines and 
concurrently simultaneously what is more for reasons unknown in spite of the strides of 
physical culture the practice of sports such as tennis football running cycling gliding 
conating camogie skating tennis of all kinds dying flying sports of all sorts autumn 
summer winter winter tennis of all kinds hockey of all sorts penicilline and succedanea in 
a word I resume flying gliding golf over nine and eighteen holes tennis of all sorts in a 
word for reasons unknown in Feckham Peckham Fulham Clapham namely concurrently 
simultaneously what is more for reasons unknown but time will tell fades away I resume 
Fulham Clapham in a word the dead loss per head since the death of Bishop Berkeley 
being to the tune of one inch four ounce per head approximately by and large more or 
less to the nearest decimal good measure round figures stark naked in the stockinged feet 
in Connemara in a word for reasons unknown no matter what matter the facts are there 
and considering what is more much more grave that in the light of the labors lost of 
Steinweg and Peterman it appears what is more much more grave that in the light the 
light the light of the labors lost of Steinweg and Peterman that in the plains in the 
mountains by the seas by the rivers running water running fire the air is the same and 
then the earth namely the air and then the earth in the great cold the great dark the air and 
the earth abode of stones in the great cold alas alas in the year of their Lord six hundred 
and something the air the earth the sea the earth abode of stones in the great deeps the 
great cold on sea on land and in the air I resume for reasons unknown in spite of the 
tennis the facts are there but time will tell I resume alas alas on on in short in fine on on 
abode of stones who can doubt it I resume but not so fast I resume the skull fading fading 
fading and concurrently simultaneously what is more for reasons unknown in spite of the 
tennis on on the beard the flames the tears the stones so blue so calm alas alas on on the 
skull the skull the skull the skull in Connemara in spite of the tennis the labors abandoned 
left unfinished graver still abode of stones in a word I resume alas alas abandoned 
unfinished the skull the skull in Connemara in spite of the tennis the skull alas the stones 
Cunard tennis ... the stones ... so calm ... Cunard ... unfinished .. . 

Around the same time as Beckett was writing this play, or perhaps a few years 
earlier, the Welsh poet Dylan Thomas, intoxicated with the sounds of the English 
language, was creating poems that are remarkably opaque. Consider the opening two 
stanzas (there are five altogether) of his poem "How Soon the Servant Sun": 

How soon the servant sun,
(Sir morrow mark),
Can time unriddle, and the cupboard stone,
(Fog has a bone
He'll trumpet into meat),
Unshelve that all my gristles have a gown
And the naked egg stand straight,

Sir morrow at his sponge,
(The wound records),
The nurse of giants by the cut sea basin,
(Fog by his spring
Soaks up the sewing tides),
Tells you and you, my masters, as his strange
Man morrow blows through food.

Poems like this make me want to cry out that this emperor has no clothes. As far as I can 
discern, close to no meaning can be pulled from these lines. But how can I be sure? I 
cannot. All I can say is that it would probably take such a great effort to "decode" these 
lines that I suspect very, very few people would be willing to make it. 

* * * 

It is perhaps not so well known that the American singer Bob Dylan (whose name 
was inspired by that of Dylan Thomas) is also an author of inspired nonsense. Some of 
his nonsense written during the 1960's was collected and published in a book called 
Tarantula. Its tone is often bitter and it exudes the confused mood of those difficult years. 
Most of the pieces in the book consist of an outburst of free associations followed by a 
letter from some strangely-named personage or other. The following sample is called "On 
Busting the Sound Barrier": 

the neon dobro's F hole twang & climax from disappointing lyrics of upstreet 
outlaw mattress while pawing visiting trophies & prop up drifter with the bag on 
head in bed next of kin to the naked shade-a tattletale heart & wolf of silver drizzle 
inevitable threatening a womb with the opening of rusty puddle, bottomless, a rude 
awakening & gone frozen with dreams of birthday fog/ in a boxspring of sadly 
without candle sitting & depending on a blemished guide, you do not feel so gross 
important/ success, her nostrils whimper, the elder fables & slain kings & inhale 
manners of furious proportion, exhale them against a glassy mud ... to dread misery 
of watery bandwagons, grotesque & vomiting into the flowers of additional help to 
future treason & telling horrid stories of yesterday's influence/ may these voices 
join with agony & the bells & melt their thousand sonnets now ... while the moth 
ball woman, white, so sweet, shrinks on her radiator, far away & watches in with 
her telescope/ you will sit sick with coldness & in an unenchanted closet ... being 
relieved only by your dark jamaican friend-you will draw a mouth on the lightbulb 
so it can laugh more freely 

forget about where youre bound youre bound for a three octave fantastic hexagram, 
you'll see it. dont worry, you are Not bound to pick wildwood flowers 
.... like i said, youre bound for a three octave titanic tantagram 

your little squirrel, 
Pety, the Wheatstraw 

Stuff and Nonsense 


Dylan is not the only popular singer of the sixties to have had a literary bent. John 
Lennon, when he was in his early twenties, reveled in the nonsensical, and published two 
short books called In His Own Write and A Spaniard in the Works. The books contain 
mostly nonsense poetry, although there are also several prose selections. Two of 
Lennon's poems will serve to illustrate his idiosyncratic style. 


I sat belonely down a tree, 

humbled fat and small. 
A little lady sing to me 

I couldn't see at all. 

I'm looking up and at the sky, 

to find such wondrous voice. 
Puzzly puzzle, wonder why, 

I hear but have no choice. 

"Speak up, come forth, you ravel me", 

I potty menthol shout. 
"I know you hiddy by this tree". 

But still she won't come out. 

Such softly singing lulled me sleep, 

an hour or two or so 
I wakeny slow and took a peep 

and still no lady show. 

Then suddy on a little twig 

I thought I see a sight, 
A tiny little tiny pig, 

that sing with all its might. 

"I thought you were a lady", 

I giggle,-well I may, 
To my suprise the lady, 

got up-and flew away. 


Softly softly, treads the Mungle 
Thinner thorn behaviour street. 
Whorg canteell whorth bee asbin? 
Cam we so all complete, 
With all our faulty bagnose? 

The Mungle pilgriffs far awoy 
Religeorge too thee worled. 
Sam fells on the waysock-side 
And somforbe on a gurled, 
With all her faulty bagnose! 

Our Mungle speaks tonife at eight 

He tell us wop to doo 

And bless us cotten sods again 

Oamnipple to our jew 

(With all their faulty bagnose). 

Bless our gurlished wramfeed 
Me cursed cafe kname 
And bless thee loaf he eating 
With he golden teeth aflame 
Give us OUR faulty bagnose! 

Good Mungle blaith our meathalls 
Woof mebble morn so green the wheel 
Staggaboon undie some grapeload 
To get a little feel 
of my own faulty bagnose. 

Its not OUR faulty bagnose now 
Full lust and dirty hand 
Whitehall the treble Mungle speak 
We might as wealth be band 
Including your faulty bagnose 

Give us thisbe our daily tit 

Good Mungle on yer. travelled 

A goat of many coloureds 

Wiberneth all beneath unravelled 



The first of these is transparent and charming, while the second is somewhat baffling and 
disturbing. What in the world is a bagnose? No clear image comes through. And why are 
all these bagnoses faulty? And does "faulty" have its normal meaning here? Hard to tell. 

* * * 

The idea of "normal meanings" is turned on its head in a recent book of poetry by 
William Benton called just that: Normal Meanings. One section of the book is titled 
"Normal Meanings"; here is an extract from it. 

Escape is, escape 

was, once more, 


as dusky. 

He watches it wrinkle into a school bell. 

It isn't music sometimes, I'm 


Leaves, practically falling 
off and into the air. 

Hills river 
sunset ice-cream 

The buildings. Things 
build up. It 

must be so many 

normal meanings. 

The downstairs lights. Probably I doubt. 


These and other 

The loveliness 
of houses. 

Clarissa is the name of the bug I just sent somewhere. 

The falseness it abjures has seemed in statements 

we are losing. 

It's hard to say. A note of privilege 
which turns up here in their appearances. 

I drink. 

The cobweb is becoming a strand of 
lamplight, its black heart 


A nice 
by the beer. 

Some may find the amorphousness of this type of poetry amusing or engaging; others 
may find it tiresome, confusing. I personally find it provocative for a while, but then I 
begin to lose interest. 

* * * 

I have somewhat greater interest in the writings of the little-known American rhetorician, 
Y. Serm Clacoxia, who, in the past 25 years or so, has sporadically penned various pieces 
of nonsense poetry and prose. Clacoxia's prose is marked by a certain degree of 
vehemence and fire, although it is sometimes a little hard to figure out exactly what he is 
ranting and raving about. Here follows one of his most lyrical tracts, entitled "The 
Illusions of Alacrity". 

For millennia it has been less than appreciated how futile are the efforts of those 
who seek to sow sobriety in the furrows of trivia. To those of us who have striven 
to clarify what has been left unclear, it has proven a loss. To others who, whilst 
valiantly straddling the fine line that divides arid piquancy from acrid pungency, 
have struggled to set right the many Undeeds and Unsaids of yore, life has shown 
itself as a beast of many colors, a mountain of many flags, a hole of many anchors. 


Who, in fact, were the Outcasts of Episode, if not the champions of clarity? 
Where, indeed, were the witnesses to litany, when their fortress of fecundity was a- 
being stormed by the Ovaltine Monster, that incubus of frozen cheerios and swollen 
bananas? And dare one wonder, with the bassoon of lunacy so shrilly betoning the 
ruined fiddles of flatulism, how it is that doublethink, narcolepsy, and poseurism 
are unthreading themselves across our land like tall, statuesque, half-uneaten yet 
virtuous whippoorwhills? Can it be that a cornflake-catechism has beguiled us into 
an unsworn acceptance of never-takism? 

What sort of entiments are they, that would uncouth a mulebound lout and 
churlishly swirl his burly figure, unfurl and twirl his curly figure, hurl his whirly 
figure, into the circuline vaults of hysteresis? With a drop of sweat unroasting his 
feverish brow, we decry his fate; with the patience of a juggernaut and the 
telemachy of a dozen opossums, we lament his disparity. And summoning all the 
powers that be, we unbow the jelly of our broken dreams, dashing it with the full 
fury of a pleistocene hurdy-gurdy against the lubrified and bulbous nexus of that 
which, having doomed the dinosaurs, seeks the engulfing of all that moves. 

Thus we act; and perhaps action itself is the Anatole's Curlicue of our era. It 
is high time to recognize that action, and action alone, will be the agent that 
transmutes the flowery barrier of unutterability into an arbitrary but sacred iota of 
purposefulness, which cannot help but penetrate into an otherwise nameless and 
universally spaghettified lack of meaning, which smears and beclouds the crab-lit 
hopes of half-beings begging for deliverance from their own private, yet strangely 
tuberculine maelstroms that begat, and begotten were from, a howling sea of 
ribosomal plagiarism. 

This is deliberate nonsense, of course, to be contrasted with the nondeliberate nonsense 
of, say, Dylan Thomas, or the nonsense to be found in crackpot letters written to 
scientists. Crackpot ideas seem to be an inevitable ingredient of any society in which 
serious scientific research is carried out; there is no way to plug all the cracks, so to 
speak. There is no way to ensure that only high-quality science will be done. Fortunately, 
most journals do not publish absolute nonsense or gobbledygook; it is filtered out at a 
very early stage. However, one journal I have come across whose pages are filled with 
utter nonsense-meant seriously-is called Art-Language. To show what I mean, here are 
two short excerpts from the May, 1975 issue. The first one is taken from the beginning of 
an article called "Community Work". It seems, from the table of contents, to have been 
written by three people collectively. The second one is taken from an article called 
"Vulgar and Popular Opinions", and seems to have a single author. 

Dionysus gets a job. (Re: language has got a hold on U. S.) (It's a Whorfian conspiracy!) 

This is hopeless manqué ontological alienation which is still dealing with ideas 
about 'discovery' as a function of a metaphysics of categories. Only for researchers 
is the failure of a modal logic industry to 'catch-my-experience'-the birth of 

Going-on in A-L indexed (somehow) is a thing-in-and-for-(dynamically) 
itself. That we never catch up with the NaturKulturLogik has little to do with the 
'actualizing' sets of the frozen dialogue ... and it's not just a ledger; our problems 
with set-theoretical axiomata are embedded into our praxis as more than just 
historical antecedents ... more than nomological permissibility ... more than 
selective filtration. We still don't recognize ourselves as very fundamental history 

The possibility of a defence of a set, as with 'a decision', is an index-margin 
of a prima facie ersatz principle for action (!). (There is no workable distinction 
between oratio recta and oratio obliqua.) All we are left with is a deontic Drang. 
Think of that as a chain strength possibility of what, eventually, comes out as a 
product (epistemic conditions?) and the product is not a Frankfurt-ish packing-it- 
all-in .... A slogan (?) might be thought of as a free-form comprised of multiple 
structural features occurring in a (partially) given, or negotiable, unit relative to 
others. That is, the slogan is a unit in one sense or another. In going-on 
(ideologically, perhaps), a slogan is a unitary filler-for-and-of that stretch of surf 
which is in a B X S position ... But there is the critical issue of that 'filler' as a 
reified function of the pusillanimous tittle-tattle of authenticity in its ellipticality (as 
a Das Volk holism) ... (e.g.) 'the Fox' material, passim, falls into that trap in dealing 
with its cultural space as a wantonly dialectical 'region' approaching the solution to 
'the negation of essence' (of homo sapiens, art or what?). 

I am tempted to quote further, to show how the wild quality of the A-L prose just 
goes on and on. But life is short. It is hard for this human being to believe that these 
paragraphs were meant to communicate something to anybody, but the journal appears 
regularly (at least it used to), and can be found on the shelves of reputable art libraries. 
Isn't it time that somebody blew the whistle? The curious thing about Art-Language is 
that the collective that writes it appears to consist of people who are deeply concerned 
with issues that hold much interest for me: the nature of reference, the relationship of 
wholes to parts, the connection of art and reality, the structure of society, the philosophy 
of set theory, the questionable existence of mathematical concepts, and so on. What is 
amazing is how such concepts can be so obscured by language that it is hard to make out 
anything except huge billows of very thick smoke. 

* * * 

An American poet whose work explores ground midway between nonsense and 
sense is Russell Edson. He writes tiny surrealistic vignettes that shed a strange light on 
life. Often he performs strange reversals, as of animate and inanimate beings, or humans 
and animals. His grammar is also oblique, one of his favorite devices being to refer 
repeatedly to something specific with the indefinite article "a", thus disorienting the 
reader. A typical sample of Edson's style is the following, drawn from his book The Clam 
Theater. 
When Science is in the Country 

When science is in the country a cow meows and the moon jumps from limb to 
limb through the trees like a silver ape. 

The cow bow-wows to hear all voice of itself. The grass sinks back into the earth 
looking for its mother. 

A farmer dreamed he harvested the universe, and had a barn full of stars, and a 
herd of clouds fenced in the pasture. 

The farmer awoke to something screaming in the kitchen, which he identified as 
the farmerette. 

Oh my my, cried the farmer, what is to become of what became? 
It's a good piece of bread and a bad farmer man, she cried. 
Oh the devil take the monotony of the field, he screamed. 
Which grows your eating thing, she wailed. 
Which is the hell with me too, he screamed. 
And the farmerette? she screamed. 
And the farmerette, he howled. 

A scientist looked through his magnifying glass in the neighborhood. 

This eerie tale leaves one with a host of unresolved images. That, of course, is 
Edson's intent. And in this regard, Edson's work is quite typical. Most of the nonsense of 
the twentieth century, it seems, has this deliberately upsetting quality to it, reflecting a 
deep malaise. It is utterly different from the nonsense of the preceding centuries. Similar 
trends exist in the other arts, particularly in music, where "classical" composers have lost 
99 percent of their audience by their experimentation with randomness and cacophony. 
However, the spirit of experimentation has also crept into rock music, where electronic 
sounds and unusual rhythms are occasionally heard. The surrealistic, nonsensical spirit 
also pervades the names of popular groups, such as "Iron Butterfly", "Tangerine Dream", 
"Led Zeppelin", "Joy of Cooking", "Human Sexual Response", "Captain Beefheart", 
"Brand X", "Jefferson Starship", "Average White Band", and so on. 

* * * 

Perhaps one of the virtues of nonsense is that it opens our minds to new 
possibilities. The mere juxtaposition of a few arbitrary words can send the mind soaring 
into imaginary worlds. It is as if sense were too mundane, and we need a breather once in 
a while. Perhaps sense is also too confining. Nonsense stresses the incomprehensible face 
of the universe, while sense stresses the comprehensible. Clearly both are important. Zen 
teachings have striven to impart the path to "enlightenment". Although I don't believe that 
such a mystical state exists, I am fascinated by the paths that are offered. Zen itself is 
perhaps the archetypal source of utter nonsense. It seems fitting to 
close this column with two Zen koans taken from the Mumonkan, or "Gateless Gate"-a 
set of koans commented upon by the Zen master Mumon in the thirteenth century. 

Joshu Examines a Monk in Meditation 

Joshu went to a place where a monk had retired to meditate and asked him: "What is, is 
what?" The monk raised his fist. Joshu replied: "Ships cannot remain where the water is 
too shallow." And he left. A few days later Joshu went again to visit the monk and asked 
the same question. The monk answered the same way. Joshu said: "Well given, well 
taken, well killed, well saved." And he bowed to the monk. 

Mumon's comment: 

The raised fist was the same both times. Why is it Joshu did not admit the first and 
approved the second one? Where is the fault? Whoever answers this knows that Joshu's 
tongue has no bone so he can use it freely. Yet perhaps Joshu is wrong. Or, through that 
monk, he may have discovered his mistake. If anyone thinks that the one's insight 
exceeds the other's, he has no eyes. 

Mumon's Poem: 

The light of the eyes is as a comet, 
And Zen's activity is as lightning. 
The sword that kills the man 
Is the sword that saves the man. 

Learning is Not the Path 

Nansen said: "Mind is not Buddha. Learning is not the path." 
Mumon's comment: 

Nansen was getting old and forgot to be ashamed. He spoke out with bad breath and 
exposed the scandal of his own home. However, there are few who appreciate his kindness. 

Mumon's Poem: 

When the sky is clear the sun appears, 
When the earth is parched rain will fall. 
He opened his heart fully and spoke out, 
But it was useless to talk to pigs and fish. 


Post Scriptum: 

I was quite aware that I had omitted some nonsense specialists, such as James 
Joyce, when I wrote this column. But there were reasons. I haven't studied Joyce, and I 
feel there is a lot of complexity there. To call Joyce's strange concoctions "nonsense" is to 
miss the mark. 

Several people wrote in, disappointed that I did not include anything by Walt 
Kelly, the creator of "Pogo". I have to agree that Kelly was a unique writer of ingenious 
and charming nonsense. In fact, I was lucky enough to grow up knowing "The Pogo Song 
Book", a record of some of Kelly's most inspired silly songs, some of them belted out by 
Kelly himself. One that gets across the flavor very well is this one: 


Twirl! Twirl! Twinkle between! 

The tweezers are twist in the twittering twain. 

Twirl! Twirl! Entwiningly twirl 

'Twixt twice twenty twigs passing platitudes plain. 

Plunder the plover and rover rides round. 

Ride all the rungs on the brassily bound, 

Billy, Swirl! Swirl! Swingingly swirl! 

Sweep along swoop along sweetly your swain. 

The poem is catchy and rhythmic, and I cannot read it without hearing the song in my 
head. Few people know that Kelly was a good composer of catchy melodies. But his 
songs, unlike his lyrics, follow very ordinary, "sensible" rules of musical syntax. 

Two other pieces of inspired nonsense that I have run across since writing this 
column are Tom Phillips' A Humument, and Luigi Serafini's Codex Seraphinianus. The 
former, subtitled "A Treated Victorian Novel", was made, by a sort of literary 
cannibalism, from another novel entitled A Human Document, itself written by a little- 
known Victorian novelist named William Hurrell Mallock. Phillips "treated" this novel 
by colorfully and imaginatively overpainting nearly all its pages, blotting out most of the 
text, leaving only a select few words or letters to poke their heads through and make 
cameo appearances now and then. This creation (or revelation?) of hidden messages in 
someone else's text yields some very strange effects. The first page of A Humument reads 
this way (I have slightly modified the two-dimensional placement of the words on the page): 

The following sing I a book, 
a book of art 
of mind and art 
that which he hid 
reveal I. 


* * * 

Codex Seraphinianus is a much more elaborate work. In fact, it is a highly 
idiosyncratic magnum opus by an Italian architect indulging his sense of fancy to the hilt. 
It consists of two volumes in a completely invented language (including the numbering 
system, which is itself rather esoteric), penned entirely by the author, accompanied by 
thousands of beautifully drawn color pictures of the most fantastic scenes, machines, 
beasts, feasts, and so on. It purports to be a vast encyclopedia of a hypothetical land 
somewhat like the earth, with many creatures resembling people to various degrees, but 
many creatures of unheard-of bizarreness promenading throughout the countryside. 
Serafini has sections on physics, chemistry, mineralogy (including many drawings of 
elaborate gems), geography, botany, zoology, sociology, linguistics, technology, 
architecture, sports (of all sorts), clothing, and so on. The pictures have their own internal 
logic, but to our eyes they are filled with utter non sequiturs. 

A typical example depicts an automobile chassis covered with some huge piece of 
what appears to be melting gum in the shape of a small mountain range. All over the gum 
are small insects, and the wheels of the "car" appear to have melted as well. The 
explanation is all there for anyone to read, if only they can decipher Serafinian. 
Unfortunately, no one knows that language. Fortunately, on another page there is one 
picture of a scholar standing by what is apparently a Rosetta Stone. Unfortunately, the 
only language on it, besides Serafinian itself, is an unknown kind of hieroglyphics. Thus 
the stone is of no help unless you already know Serafinian. Oh, well ... Many of the 
pictures are grotesque and disturbing, but others are extremely beautiful and visionary. 
The inventiveness that it took to come up with all these conceptions of a hypothetical 
land is staggering. 

Some people with whom I have shared this book find it frightening or disturbing 
in some way. It seems to them to glorify entropy, chaos, and incomprehensibility. There 
is very little to fasten onto; everything shifts, shimmers, slips. Yet the book has a kind of 
unearthly beauty and logic to it, qualities pleasing to a different class of people: people 
who are more at ease with free-wheeling fantasy and, in some sense, craziness. I see 
some parallels between musical composition and this kind of invention. Both are abstract, 
both create a mood, both rely largely on style to convey content. 

Music is, in a way, a kind of nonsense that nobody really understands. It 
captivates nearly every human being who can hear and yet, for all that, we still know 
amazingly little about how music works its wonders. But if music is a kind of auditory 
nonsense, that does not prevent there from arising even more extreme brands of auditory 
super-nonsense. The works of Karlheinz Stockhausen, Peter Maxwell Davies, Luciano 
Berio, and John Cage will provide a wonderful introduction to that genre, in case some 
reader does not know what I am talking about. Especially if you like the banging of 
garbage-can lids or the sound of gangland murders, their "musical offerings" are sure to 
be right up your alley. 

David Moser is as fascinated with fringe-language as I am, and has explored 
many uncharted regions in that territory. His longest and most adventurous journey 
consisted of the writing and drawing of a roughly 40-page booklet called "Metaculture 
Comics". Inspired by James Joyce, this volume contains some of the most original and 
zany meaningless writings I have ever seen. It is also chock-full of the frame-breaking 
and self-referential devices so beloved by modern graphic designers. A one-page sample 
is shown in Figure 11-1. 

FIGURE 11-1. One page from David Moser's "Metaculture" (1979) 

* * * 

The purpose of this column was to emphasize the very fine line that separates the 
meaningful from the meaningless. It is a boundary line that has a great deal to do with the 
nature of human intelligence, because the question of how meaning emerges out of 
meaningless constituents when combined in certain patterned ways is still a perplexing 
one. Computers are good at producing very simple passages that-to us-seem to have 
meaning, and they are excellent at producing passages that are utterly devoid of meaning. 
It will be interesting to see if someday a computer can tread the line and produce an 
artistic exploration of meaning by producing provocative nonsense in the same way as 
these human explorers of the territory have done. 



Variations on a Theme 
as the Crux of Creativity 

October, 1982 

You see things; and you say "Why?" 

But I dream things that never were; and say "Why not?" 

-George Bernard Shaw in Back to Methuselah 

When I first heard this beautiful line it made a deep impression on me. It was in 
the spring of 1968, during the presidential campaign, and Robert Kennedy had made 
this line his theme. I thought it was wonderfully poetic, and I assumed he himself had 
dreamt it up. Only many years later did I find out I was quite wrong: Not only had he 
not made it up, but the character who utters it in the Shaw play is the snake in the 
Garden of Eden! How disturbing! Why couldn't it have been the way I thought? 

"To dream things that never were"-this is not just a poetic phrase, but a truth 
about human nature. Even the dullest of us is endowed with this strange ability to 
come up with counterfactual worlds and to dream. But why do we have this ability-in 
fact, this proclivity? What sense does it make? And-how can one "see" what is visibly 
not there? 

On my table sits a Rubik's Cube. I look at it and see a 3 × 3 × 3 cube whose 
faces turn. I see-so it seems to me-what is there. But some people looked at that cube 
and saw things that weren't there. They saw cubes with shaved edges, spherical 
"cubes", differently coloured cubes, Magic Dominos, 2 × 2 × 2 cubes, 4 × 4 × 4 and 
higher-order cubes, skew-twisting cubes, pyramids, octahedra, dodecahedra, 
icosahedra, four-dimensional magic polyhedra. (See figures galore in Chapters 14 and 
15.) And the list is not complete yet! Just you wait! 

How did this come about? How is it that, in looking directly at something solid 
and real on a table, people can see far beyond that solidity and reality —can see an 
"essence", a "core", a "theme" upon which to devise 
variations? I must stress that the solid cube itself is not the theme (although it is 
convenient and easy to speak as if it were). In the mind of each person who perceives 
a Rubik's Cube there arises a concept that we could call "Rubik's-Cubicity". It's not 
the same concept in each mind, just as not everyone has the same concept of 
asparagus or of Beethoven. The variations that are spun off by a given cube-inventor 
are variations on that concept. In a discussion of perception and invention, this 
distinction between an object and some mind's concept of the object is simple but crucial. 

Now when Eve Rybody comes up with a new variation-let's say the 4 × 4 × 4- 
is it as a result of wracking her brain, trying as hard as she can to "go against the 
grain", so as to come up with something original? Does she think to herself, "Golly, 
that Rubik must have really exerted himself to come up with this totally new idea, 
therefore I too must strain my mind to its limits in order to invent something 
original."? No, no, no! A thousand times no. Einstein didn't go around wracking his 
brain, muttering to himself, "How, oh how, can I come up with a Great Idea?" Like 
Einstein (although perhaps on a lesser scale), Eve never needs to ask herself, "Hmm, 
let's see, shall I try to figure out some way to spin off a variation on this object sitting 
here in front of me?" No; she just does what comes naturally. 

The bottom line is that invention is much more like falling off a log than like 
sawing one in two. Despite Thomas Alva Edison's memorable remark, "Genius is 2 
percent inspiration and 98 percent perspiration", we're not all going to become 
geniuses simply by sweating more or resolving to try harder. A mind follows its path 
of least resistance, and it's when it feels easiest that it is most likely being its most 
creative. Or, as Mozart used to say, things should "flow like oil"-and Mozart ought to 
know! Trying harder is not the name of the game; the trick is getting the right concept 
to begin with, so that making variations on it is like taking candy from a baby. 

Uh-oh-now I've given the cat away! So let me boldly state the thesis that I 
shall now elaborate: Making variations on a theme is really the crux of creativity. 

* * * 

On the face of it, this thesis is crazy. How can it possibly be true? Aren't 
variations simply derivative notions, never truly original creations? Isn't the notion of 
a 4 × 4 × 4 cube simply a result of "twiddling a knob" on the concept of Rubik's- 
Cubicity? You merely twist the knob from its "factory setting" of 3 to the new setting 
of 4, and presto-you've got it! An inner voice protests: "That's just too easy. That's 
certainly not where Rubik's Cube, the Rite of Spring, relativity, or Romeo and Juliet 
came from, is it? Isn't there a 'magic spark' that leaps across a gap when a Rubik or a 
Stravinsky or an Einstein or a Shakespeare comes up with a great idea, something that 
is patently lacking when an Eve Rybody merely twiddles a knob on an already- 
existing notion?" 

Well, of course, inventing the notion of a 4 × 4 × 4 cube is far less deep 
than coming up with special or general relativity. I'd be the last to deny that. But that 
doesn't mean that the underlying mental processes are necessarily based on totally 
different principles. Of course, there is a boring sense in which the underlying mental 
processes in your brain, my brain, Eve's brain, and Einstein's brain are all "the same"- 
namely, they all depend on neural hardware. But it is not at such a microscopic, such 
a biological level that I mean it when I suggest that the underlying mental processes in 
different brains are somehow the same. What I mean is that there are mechanisms, 
processes, call them what you will, that can be described functionally, without 
reference to the neural substrate that enables them to take place in brains. 

Thus, a notion like "twiddling a knob on a concept" bears no relation to the 
activities of neurons in the brain-or at least no obvious relation. Well then, is there 
any reality to it, or is it just a metaphor? If someday we at last come to understand the 
brain, will we then be confident that we're on solid ground when we speak of a brain 
literally containing concepts? Or will such statements forever remain shaky and 
metaphorical façons de parler, compared to such hard-science facts as "At the back of 
each human brain there is a cerebellum"? Well, until words like "concept" have 
become terms as scientifically legitimate as, say, "neuron" or "cerebellum", we will 
not have come anywhere close to understanding the brain-at least not in my book. 

However, it must be admitted that at present, words like "concept" are only 
metaphorical. They are protoscientific terms awaiting explication. But this is a very 
good reason to try to flesh them out as much as possible, to try to see what the 
metaphor of "twiddling knobs on a concept" involves. Pinning down the meaning of 
such a metaphor will help us know much more clearly what we would ideally want 
from a "hard-science" explanation of the brain. 

This metaphor makes your imagination conjure up a vision of a tangible thing 
called a "concept" that literally has a set of knobs on it, just waiting to be twiddled. 
What I picture in my mind's eye is something that, instead of being built out of 
millions of neurons, is more like a metallic "black box" with a panel on it, containing 
a row of plastic knobs with little pointers on them, telling you what each one's setting is. 

Just to make this image more concrete, let me describe a genuine example of 
such a black box with knobs. Back in the old days of player pianos, good pianists 
made piano rolls of all sorts of wonderful music. Nowadays, you can buy phonograph 
records of those rolls being played back on player pianos-but you can do better than 
that. Many of the best rolls made on a special kind of piano called a Vorsetzer have 
been converted into digital cassette tapes-not to be played on tape recorders, but on 
pianos specially equipped with a device called a "Pianocorder". This "reads" the 
magnetic tape and converts it into instructions to the keyboard and pedals, so that 
your piano then plays the piece. Each Pianocorder has a black box on the front of 
which is a control panel with a row of three knobs (tempo, pianissimo, and fortissimo) 
and one switch ("soft pedal"). By twisting the tempo knob you can make 
Rachmaninoff speed up, by twiddling the pianissimo and fortissimo knobs you can 
make Horowitz play more softly or Rubinstein more loudly. It's too bad there's not a 
knob labelled "pianist" so that you can select who plays. After all, it would be 
interesting to change Horowitzes in midstream. 
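The Pianocorder's panel can be pictured quite concretely as a small transformation applied to recorded note events. Here is a minimal sketch of that idea; the event format, the function name, and the velocity split at 64 are all invented assumptions for illustration, not the real device's internals.

```python
# A sketch of the Pianocorder's three knobs applied to a recorded
# performance. Events are (time_in_seconds, pitch, velocity) triples,
# with velocity in 0..127 -- a hypothetical format, not the device's.

def apply_knobs(events, tempo=1.0, pianissimo=1.0, fortissimo=1.0):
    """Rescale a performance: tempo > 1.0 speeds playback up;
    pianissimo scales the soft notes, fortissimo the loud ones
    (split, arbitrarily, at velocity 64)."""
    out = []
    for t, pitch, vel in events:
        scale = pianissimo if vel < 64 else fortissimo
        new_vel = max(0, min(127, round(vel * scale)))
        out.append((t / tempo, pitch, new_vel))
    return out

# Make Rachmaninoff play twice as fast, and the loud notes a bit louder:
performance = [(0.0, 60, 40), (0.5, 64, 90), (1.0, 67, 120)]
faster_louder = apply_knobs(performance, tempo=2.0, fortissimo=1.1)
```

Each knob is just a numeric parameter of the playback, which is precisely what makes twiddling them so effortless.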

* * * 

This device takes us one step toward realizing a dream of the unique Canadian 
pianist Glenn Gould. Gould is very tuned in to the electronic age, and for years has 
been advocating using computers to allow people to control the music they hear. You 
begin with an ordinary recording of, say, Glenn Gould himself playing a concerto by 
Mozart. But this is merely raw data for you to tamper with. On your space-age record 
player, you have a bunch of knobs that allow you to slow the music down or to speed 
it up ad libitum, to control the volume of all the separate sections of the orchestra, 
even to correct for high notes played too flat by the violinists! In effect, you become 
the conductor, with knobs to control every aspect of the performance, dynamically. 
The fact that it was originally Glenn Gould at the piano is, by the time you're done 
with it, irrelevant. By now you've totally taken over and made it your very own 
performance! Presumably, such systems would eventually evolve to the point where 
you could start with the mere written score, dispensing entirely with the acoustic 
recording stage. 

But why not carry this further, then? If we are allowing ourselves to fantasize, 
why not go as far as we can imagine? Why should our "raw data" be limited to the 
finite universe of already-composed pieces? Why could there not be a knob to control 
the mood of the composition, another to control the composer whose style it is to be 
written in? This way, we could get a new piece by our favorite composer in any 
desired mood. But really, this is too conservative. Why should we be limited to the 
finite universe of already-born composers? Why could there not be a knob to allow us 
to interpolate between composers, thus making it possible for us to tune our music- 
making machine to an even mixture of Johann Sebastian Bach, Giuseppe Verdi, and 
John Philip Sousa (ugh!), or a position halfway between Schubert and the Sex Pistols 
(super-ugh!)? And why stop at interpolation? Why not extrapolate beyond a given 
composer? For instance, I might want to hear a piece by "the composer who is to 
Ravel as Ravel is to Chopin". The machine would merely need to calculate the ratios 
of its knob settings for Ravel and Chopin, and then multiply the Ravel settings by 
those same ratios to come up with a super-Ravel. 
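The "super-Ravel" arithmetic can be taken literally, if only as a joke. In the following sketch, the composers' knob names and settings are pure inventions of mine for illustration; the point is just the ratio trick the text describes:

```python
# A purely illustrative sketch: pretend each composer reduces to a vector
# of knob settings (the knob names and numbers are invented).
chopin = {"chromaticism": 2.0, "orchestral_color": 1.0, "rhythmic_drive": 1.5}
ravel  = {"chromaticism": 3.0, "orchestral_color": 4.0, "rhythmic_drive": 1.8}

# For each knob, compute the Chopin -> Ravel ratio, then apply that same
# ratio to Ravel's own setting to extrapolate a "super-Ravel".
super_ravel = {k: ravel[k] * (ravel[k] / chopin[k]) for k in ravel}

print(super_ravel)
# e.g. chromaticism: 3.0 * (3.0 / 2.0) = 4.5
```

The whole joke, of course, is the premise that such knob vectors exist at all; the paragraph that follows takes the premise apart.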

It's no trickier than solving any old analogy problem-you know, simple 
problems like this: 

What is to a triangle as a triangle is to a square? 

What is to a honeycomb as a knight's move is to a city grid? 

What is to four dimensions as the "impossible triangle" illusion is to three? 

What is to Greece as the Falkland Islands are to Britain? 

What is to visual art as fugues are to music? 

What is to a waterbed as ice is to water? 

What is to the United States as the Eiffel Tower is to France? 

What is to German as Shakespeare's plays are to English? 

What is to English as simplified characters are to Chinese? 

What is to 1-2-3-4-4-3-2-1 as 4 is to 1-2-3-4-5-5-4-3-2-1? 

What is to pqc as abc is to aqc? 

The truth is, of course, that analogy problems are staunchly resistant to 
mechanization. The knobs on most concepts are not so apparent as to allow us to just 
read their settings right off. The examples above simply carried a sensible thought to a 
ludicrous extreme. However, it is still worthwhile to look seriously at the idea that a 
concept can be considered as a "knobbed machine" whose knobs can be twiddled to 
produce a bewildering array of variations. 

* * * 

The Rubik's-Cube concept, with its "order" knob set at 3, produces an ordinary 
3 X 3 X 3 cube-and with that knob set at 4, a 4 X 4 X 4. Come to think of it, doesn't 
there have to be a separate knob for each dimension, so that you can twiddle each one 
independently of the others? After all, not all variations have to be cubical. The Magic 
Domino is 3 X 3 X 2. So if we agree that there are three knobs defining the shape, 
then in the original cube they all just accidentally happened to have the same setting. 
Now given these three knobs, we can use our concept-our knobbed machine-to 
generate such mental objects as a 7 X 7 X 7 Rubik's Cube, a 2 X 2 X 8 Magic 
Domino, even a 3 X 5 X 9 Rubik's Magic Brick (or, if you'll pardon me, a "Rubrick"). 
But wait a minute-if there really are just three knobs, then we're locked into three 
dimensions! Obviously we don't want that. So let's add a fourth knob to control the 
length in the fourth dimension. With this knob, we can now make a four-dimensional 
2 X 3 X 5 X 7 Rubrick, as well as any Rubik's Tesseract that we might want. But 
needless to say, once we've gone through the gate from three dimensions to four, 
certainly we should expect to be able to go further. For any n, we could imagine n- 
dimensional Rubik's objects-for example, a 2 X 3 X 4 X 5 X 6 X 7 X 8 Hyper- 
Rubrick. But now something peculiar has happened. We must now conceive of our 
machine— our concept— as having a potentially unlimited number of knobs on it (one 
for each dimension in n-dimensional space). If n is set to 3, there need only be 3 more 
knobs. But if n is 100, we need 100 extra knobs! 

No real machine has a variable number of knobs. Now this may sound like a 
somewhat trivial observation. However, it leads into some tricky waters. 
The point is that, if we wish to keep on using the metaphor of a concept as a machine 
with knobs on it, we have to stretch the very concept of "knob". New knobs must be 
able to sprout, depending on the settings of other knobs. Or you can think of it this 
way, if you wish: on each concept, there are potentially an infinite number of knobs, 
and at any moment, some new knobs may get revealed as a consequence of the 
settings of other knobs. 
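The "sprouting knobs" idea can be mimicked in a few lines of code. This is my own sketch, under the assumption that a concept's visible control panel is a function of its current settings, with the generalized Rubik's object as the example:

```python
# Sketch of a concept whose knob panel depends on its own settings:
# a generalized n-dimensional Rubik's object. Turning the "dimensions"
# knob makes new size knobs sprout, one per dimension.
def rubik_knobs(settings):
    """Return the knob names visible given the current settings."""
    n = settings.get("dimensions", 3)
    return ["dimensions"] + [f"size_{i}" for i in range(n)]

panel = rubik_knobs({"dimensions": 3})   # the ordinary cube
print(panel)                             # ['dimensions', 'size_0', 'size_1', 'size_2']

panel = rubik_knobs({"dimensions": 7})   # a 7-dimensional Rubik's object
print(len(panel) - 1)                    # 7 size knobs have sprouted
```

Note that the panel is computed on demand rather than stored: no fixed list of knobs is ever written down, which is exactly what the metaphor of a machine with a fixed panel cannot accommodate.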

I'm not sure I like that view, however. It's too cut and dried, too closed and 
predetermined for my tastes. I am more in favour of a view that says that the knobs on 
any one concept depend on the set of concepts that happen to be awake 
simultaneously in the mind of the person. This way, new knobs can spring into 
existence seemingly out of nowhere; they don't all have to be present from the outset 
in the isolated concept. If we go back to Rubik, this would mean that his concept of 
Rubik's Cube didn't (and still doesn't) explicitly-or even implicitly-contain all the 
possible variations that people may come up with. Rubik anticipated, and even 
designed, many of the objects that have subsequently appeared and that we perceive 
as "variations on a theme"-but certainly, his mind did not exhaust that fertile theme. 
Once the concept entered the public domain, it started migrating and developing in 
ways that Rubik could never have anticipated. 

* * * 

There is a way that concepts have of "slipping" from one into another, 
following a quite unpredictable path. Careful observation and theorizing about such 
slippages affords us perhaps our best chance to probe deeply into the hidden murk of 
our conceptual networks. An example of such a slip is furnished to us whenever we 
make a typo or a grammatical mistake, utter a malapropism ("She's just back from a 
one-year stench at Berkeley") or a malaphor (a novel phrase concocted unconsciously 
from bits and pieces of other phrases, such as "He's such an easy-go-lucky fellow" or 
"Uh-oh, now I've given the cat away"), or confuse two concepts at a deeply semantic 
level (e.g., saying "Tuesday" but meaning "February", or saying "midnight" in lieu of 
"zero degrees"). These types of slip are totally accidental and come straight out of our 
unconscious mind. 

However, sometimes a slippage can be nonaccidental yet still come from the 
unconscious mind. By "nonaccidental" here, I do not mean to imply that the slip is 
deliberate. It's not that we say to ourselves, "I think I shall now slip from one 
concept into a variation of it"; indeed, that kind of deliberate, conscious slippage is 
most often quite uninspired and infertile. "How to Think" and "How to Be Creative" 
books-even very thoughtful ones such as George Pólya's How to Solve It-are, for that 
reason, of little use to the would-be genius. 

Strange though it may sound, nondeliberate yet nonaccidental slippage 
permeates our mental processes, and is the very crux of fluid thought. That is my 
firmly held conviction. This subconscious manufacture of "subjunctive variations on a 
theme" is something that goes on day and night in each of us, usually without our 
slightest awareness of it. It is one of those things that, like air or gravity or three- 
dimensionality, tend to elude our perception because they define the very fabric of our 
lives. 

To make this concrete, let me contrast an example of "deliberate" slippage 
with an example of "nondeliberate but nonaccidental" slippage. Imagine that one 
summer evening you and Eve Rybody have just walked into a surprisingly crowded 
coffeehouse. Now go ahead and manufacture a few variants on that scene, in whatever 
ways you want. What kinds of things do you come up with when you deliberately 
"slip" this scene into hypothetical variants of itself? 

If you're like most people, you'll come up with some pretty obvious slippages, 
made by moving along what seem to be the most obvious "axes of slippability". 
Typical examples are: 

I could have come with Ann Yone instead of Eve Rybody. 

We could have gone to a pancake house instead of a coffeehouse. 

The coffeehouse could have been nearly empty instead of full. 

It could have been a winter's evening instead of a summer's evening. 

Now contrast your variations with one that I overheard one evening this past 
summer in a very crowded coffeehouse, when a man walked in with a woman. He 
said to her, "I'm sure glad I'm not a waitress here tonight!" This is a perfect example 
of a subjunctive variation on the given theme-but unlike yours, this one was made 
without external prompting, and it was made for the purposes of communication to 
someone. The list above looks positively mundane next to this casually tossed-off 
remark. And the remark was not considered to be particularly clever or ingenious by 
his companion. She merely agreed with the thought by saying "Yeah." It caught my 
attention not so much because I thought it was clever, but mostly because I am always 
on the lookout for interesting examples of slippability. 

I found this example not just mildly interesting, but highly provocative. If you 
try to analyze it, it would appear at first glance to force you as listener to imagine a 
sex-change operation performed in world record time. But when you simply 
understand the remark, you see that in actuality, there was no intention in the 
speaker's mind of bringing up such a bizarre image. His remark was much more 
figurative, much more abstract. It was based on an instantaneous perception of the 
situation, a sort of "There-but-for-the-grace-of-God-go-I" feeling, which induces a 
quick flash to the effect of "Simply because I am human, I can place myself in the 
shoes of that harried waitress-therefore I could have been that waitress." Logical or 
not, this is the way our thoughts go. 

So when you look carefully, you see that this particular thought has practically 
nothing to do with the speaker, or even with the waitresses he sees. It's just his flip 
way of saying, "Hmm, it sure is busy here tonight." And 
that's of course why nobody really is thrown for a loop by such a remark. Yet it was 
stated in such a way that it invites you to perform a "light" mapping of him onto a 
waitress, just barely noticing (if at all) that there is a sex difference. What an 
amazingly subtle thought process is involved here! 

And what is even more amazing (and frustrating) to me is how hard it is to 
point out to people how amazing it is! People find it very hard indeed to see what's 
amazing about the ordinary behavior of people. They cannot quite imagine how it 
might have been otherwise. It is very hard to slip mentally into a world in which 
people would not think by slipping mentally into other worlds-very hard to make a 
counterfactual world in which counterfactuals were not a key ingredient of thought. 

Another quick example: I was having a conversation with someone who told 
me he came from Whiting, Indiana. Since I didn't know where that was, he explained, 
"Whiting is very near Chicago-in fact, it would be in Illinois if it weren't for the state 
line." Like the earlier one, this remark was dropped casually; it was certainly not an 
effort to be witty. He didn't chuckle, nor did I. I simply flashed a quick smile, 
signaling my understanding of his meaning, and then we went on. But try to analyze 
what this remark means! On a logical level, it is somewhat like a tautology. Of course 
Whiting would be in Illinois if the Illinois state line made it be so-but if that's all he 
meant, it is an empty remark, because it holds just as well for cities thousands of miles 
from Chicago. But clearly, the notion he had in mind was that there is an accidental 
quality to where boundary lines fall, a notion that there are counterfactual worlds 
"close" to ours, worlds in which the Illinois-Indiana line had gotten placed a couple of 
miles further east, and so on. And his remark tacitly assumed that he and I shared such 
intuitions about the impermanence and arbitrariness of geographical boundary lines, 
intuitions about how state lines could "slip". 

Remarks like this betray the hidden "fault lines of the mind"; they show which 
things are solid and which things can slip. And yet, they also reveal that nothing is 
reliably unslippable. Context contributes an unexpected quality to the knobs that are 
perceived on a given concept. The knobs are not displayed in a nice, neat little control 
panel, forevermore unchangeable. Instead, changing the context is like taking a tour 
around the concept, and as you get to see it from various angles, more and more of its 
knobs are revealed. Some people get to be good at perceiving fresh new knobs on 
concepts where others thought there were none, just as some people get to be good at 
perceiving mushrooms in a forest where others see none, even when they stare right at them. 

* * * 

It may still be tempting to think that for each well-defined concept, there must 
be an "ultimate" or "definitive" set of knobs such that the abstract space traced out by 
all possible combinations of the knobs yields all possible 
instantiations of the concept. A case in point is the concept of the letter 'A'. The 
typographically naive might think that there are four or five knobs to twiddle here, 
and that's all. However, the more you delve into letter forms, the more elusive any 
attempt to parametrize them mathematically becomes. One of the most valiant efforts 
at "knobbifying the alphabet" has been the letterform-defining system called 
"Metafont", developed at Stanford by the well-known computer scientist Donald 

Knuth's purpose is not to give the ultimate parametrization of the letters of the 
alphabet (indeed, I suspect that he would be the first to laugh at the very notion), but 
to allow a user to make "knobbed letters "-we could call them letter schemas. This 
means that you can choose for yourself what the variable aspects of a letter are, and 
then, with Metafont's aid, you can easily construct knobs that allow those aspects to 
vary. This includes just about anything you can think of: stroke lengths, widenings or 
taperings of strokes, curvatures, the presence or absence of serifs, and so on. The full 
power of the computer is then at your disposal; you can twiddle away to your heart's 
desire, and the computer will generate all the products your knob-settings define. 

Going further than letters in isolation, Knuth then allowed letters to share 
parameters-that is, a single "master knob" can control a feature common to a group of 
related letters. This way, although there may be hundreds of knobs when you count 
the knobs on all the control panels of all the letters of the alphabet, there will be a far 
smaller number of master knobs, and they will have a deeper and more pervasive 
influence on the whole alphabet. What happens, in effect, is that by twiddling the 
master knobs alone, you have a way of drifting smoothly through a space of related 
typefaces. 

Perhaps Knuth's greatest virtuoso trick yet with Metafont is what he did with 
Psalm 23, which in this version consists of 593 characters. (See Figure 12-1.) Knuth 
had defined a full set of letters that shared 28 "master knobs". He began his printed 
version of the psalm with all 28 master knobs at their leftmost settings. Then, letter by 
letter, he inched his way toward the rightmost settings, turning each knob 1/592 of the 
way, so that by the time he had reached the final letter, the extreme opposite end of 
the spectrum had been attained. In one sense, every letter in this version of the psalm 
is printed in a different typeface! And yet the transition is so smooth as to be locally 
undetectable even to a finely trained eye. This example is drawn from Knuth's 
inspiring article in Visible Language entitled "The Concept of a Meta-Font". 
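The arithmetic of Knuth's gradient is easy to reproduce. In this sketch the 28 endpoint values are placeholders of my own (normalized to 0 and 1); the real point is the schedule: character i of the 593 gets every master knob set the fraction i/592 of the way from its leftmost to its rightmost value:

```python
# Sketch of the Psalm-23 gradient: 593 characters, 28 master knobs,
# each knob advanced 1/592 of its range per character, so that the first
# character uses the leftmost settings and the last the rightmost.
def knob_settings(i, left, right, steps=592):
    """Settings for character i (0-based), linearly interpolated."""
    t = i / steps
    return [lo + t * (hi - lo) for lo, hi in zip(left, right)]

left  = [0.0] * 28   # placeholder leftmost settings of the 28 master knobs
right = [1.0] * 28   # placeholder rightmost settings

assert knob_settings(0, left, right) == left      # first character of the psalm
assert knob_settings(592, left, right) == right   # the 593rd character
```

Every intermediate character thus gets its own typeface, yet adjacent characters differ by only 1/592 of the total range, which is why the drift is locally invisible.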

One of Knuth's main theses is that with computers, we now are in the position 
of being able to describe not just a thing in itself, but how that thing would vary. 
Metafont epitomizes this thesis. In a sense, the computer, rather than simply blindly 
reproducing fixed letter shapes, has a crude "understanding" of what it is drawing, 
created by the designer who "knobbified" the letters. And yet, one should be careful 
not to fall under the illusion, so easily created by Metafont's extraordinary power, that 
28 master knobs-or any finite set of knobs-might actually span the entire space of all 
possible typefaces. This is about as far from the truth as would be the claim that the 
space of all possible face types (see Figure 12-2) could be captured in a computer 
program with 28 knobs. 


The LORD is my shepherd; 
I shall not want. 
He maketh me to lie down 
in green pastures: 
he leadeth me 
beside the still waters. 
He restoreth my soul: 
he leadeth me 
in the paths of righteousness 
for his name's sake. 
Yea, though I walk through the valley 
of the shadow of death, 
I will fear no evil: 
for thou art with me; 
thy rod and thy staff 
they comfort me. 
Thou preparest a table before me 
in the presence of mine enemies: 
thou anointest my head with oil, 
my cup runneth over. 
Surely goodness and mercy 
shall follow me 
all the days of my life: 
and I will dwell 
in the house of the LORD 
for ever. 

FIGURE 12-1. Psalm 23, printed by Donald Knuth's METAFONT program. It starts 
out in an old-fashioned, highly serifed typeface and gradually modulates into a 
modernistic, sans-serif typeface. Each step, imperceptible on its own, is accomplished 
by making a tiny shift in 28 parameters governing the overall appearance of the 
computerized alphabet. 

Even the space of all versions of the letter 'A' is only barely explored when 
you twiddle all the knobs in Knuth's representation of 'A'-not just the 28 master knobs 
it shares with other letters, but the many "private" knobs it has as well. Even a 
thousand knobs would not suffice to cover the variety of letter 'A's that people 
recognize easily. Some evidence of the richness of the 'A' concept is shown in Figure 
12-3. These 'A's are all taken from real typefaces in the 1982 Letraset Catalogue. To 
illustrate that such richness is not a quirk of our writing system, I have assembled, in 
Figure 12-4, a similar collection of variants of the Chinese character meaning "black" 
(pronounced "hei", rhyming with 'A'). I found them in some Chinese-language 
graphic-design catalogues. This figure is a real eye-opener for people who don't read 
Chinese. They usually ask incredulously, "You mean Chinese people can easily tell 
that these are all the same character?!" Of course they can, and in a split second, just 
as we can for the matrix of 'A's. 

FIGURE 12-2. Sixteen highly diverse human faces, culled from Federico Fellini's extensive 
library of still photos of people. [From Fellini's Faces, by Christian Strich.] 

FIGURE 12-3. 56 'A's in different styles, all drawn from a recent Letraset catalogue. The 
names of their respective typefaces are given on the facing page. To native readers of the Latin 
alphabet, it is an almost immediate visual experience to recognize how any one of them is an 'A'. 
No conscious processing is required. A couple of these seem far-fetched, but the rest are quite 
obvious. The most canonical of all 56 is probably Univers (D-3). Note that no single feature, 
such as having a pointed top or a horizontal crossbar (or even a crossbar at all!) is reliable. Even 
being open at the bottom is unreliable. What is going on here? (Compare this figure to Figure 

[Grid of 56 'A's in eight rows and seven columns labelled A through G. Only some of the 
typeface names survive legibly in this transcription, among them Futura Black, Old English, 
Univers 67, Algerian, Block Up, and Pluto Outline.] 

FIGURE 12-4. 23 "hei"s (the Chinese character meaning "black") in different styles, drawn 
from a variety of "artistic-character catalogues". To native readers of Chinese, it is an almost 
immediate visual experience to recognize how any one of them is a "hei". No conscious processing 
is required. None of these is as far-fetched as the extreme 'A's in the previous figure. For 
non-readers of Chinese (or even non-native readers of Chinese) it requires some conscious 
processing to "unmask" many of these. The most canonical of all 23 are: the one enclosed in 
dotted lines in the upper left corner, and the framed one in the very center (ironically not black, 
but white). Try to see how the various features of the "Platonic" character are implanted in 
these mortal incarnations. One learns here to appreciate the French saying Plus ça change, 
plus c'est la même chose. 

There is a crucial distinction to be drawn here. A machine with one on-off switch (the 
most trivial kind of knob) for each square in a 500 X 500 grid will certainly define 
any of the 'A's shown-but it will not exclude 'B's or "hei"s or pictures of your 
grandmother or of trolley cars. It is another matter altogether to define a set of knobs 
whose twiddling covers all the 'A's, showing all the interpolations between them (as 
well as extrapolations in all possible directions)-yet never leads you out of the space 
of recognizable 'A's. This is far trickier! Similarly, it is a nearly trivial project to 
write a computer 
program that in theory writes all possible sequences and combinations of tones in all 
possible rhythmic patterns-but that is a far cry from writing a program that produces 
only pieces in the style of Bach. Putting on the constraints makes the program 
unutterably more complex! 
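The size gap between the raw switch-space and any knobbed family is easy to make vivid. This back-of-envelope comparison is mine, not the text's, and the "1000 knobs with 100 settings each" family is deliberately generous:

```python
import math

# The raw space: one on-off switch per cell of a 500 X 500 grid.
raw_patterns_log10 = 500 * 500 * math.log10(2)   # log10 of 2**250000

# A generously knobbed family: 1000 knobs, each with 100 distinct settings.
family_log10 = 1000 * math.log10(100)            # log10 of 100**1000

print(round(raw_patterns_log10))   # the grid allows roughly 10**75257 patterns
print(round(family_log10))         # the family traces out only about 10**2000
```

Even this absurdly large family occupies a vanishingly small corner of the raw space; the hard part, as the text says, is arranging for that corner to contain all and only the 'A's.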

What Metafont gives you, rather than the full space of all typefaces or 'A's, is a 
subspace, and such a tightly related subspace that it is perhaps best to call it a family. 
Nobody would be able to predict butterflies from having studied ants and wasps and 
beetles. Certainly no currently imaginable program would, anyway. Likewise, nobody 
would be able to predict the full magnitude of the concept of 'A' from seeing only the 
family traced out by the finite number of knobs in any realistic Metafont program for 
'A'. 

The next stage beyond Metafont will be a program that, on its own, can extract 
a set of knobs from a set of given input letters. This, however, is a program for the 
distant future. At present, it takes a highly trained and perceptive typeface designer 
months to convert a set of letterforms into Metafont programs with knobs flexible 
enough to warrant the trouble taken. It would be relatively easy to do it in some crude 
mechanical way, but what one wants is for stylistic unity to be preserved even as the 
master knobs are twiddled-and therefore, the task of automating the production of 
Metafont programs amounts to automation of artistic perception. It's not just around 
the corner. 

* * * 

There is a curious book called One Book Five Ways, published in 1978 by 
William Kaufmann, Inc. It came about this way. As an educational experiment in 
comparative publishing procedures, a manuscript on indoor gardening was sent 
around to five different university presses, and they all cooperated in coming up with 
full publication versions of the book, which turned out to be stunningly different at all 
conceivable levels. William Kaufmann had the bright idea of publishing pieces of the 
various versions side by side; what resulted was this elegant "metabook". It brings 
home the meaning of the old saying that there's more than one way to skin a cat. 

Making this book was an extravagant foray into "possible worlds", the kind of 
thing that seems very hard to do. One of Knuth's points, however, is that as computers 
become more sophisticated and common, the notion of skinning a cat in nine different 
ways will gradually become less extravagant. Once your "cat" has been represented 
inside a powerful computer program, it is no longer just one cat; it has become, 
instead, a "cat-schema"-a mold for many cats at once, and you can skin them all 
differently (or at least until the cat-schema runs out of lives). 

Text formatters and computer typesetting present us easily with many 
alternative versions of a piece of text. Metafont shows us how letterforms can glide 
into alternative versions of themselves. It is now up to us to continue this trend of 
extending our abilities to see further into the space of possibilities surrounding what 
is. We should use the power of computers to aid us in seeing the full concept-the 
implicit "sphere of hypothetical variations"-surrounding any static, frozen perception. 

FIGURE 12-5. In (a), a stylized implicosphere. In (b) through (d), various degrees of 
overlap of two implicospheres are portrayed. Too much overlap (b) leads to mushy, 
sloppy thought, while too little overlap (d) leads to sparse, dull thought. The ideal 
amount of overlap and autonomy (c) leads to creative, insightful thought. 

In (e), a related and charming geometrical problem called "Mrs. Miniver's 
problem" is shown. The idea is to determine the conditions under which the overlap of 
two circles (representing two people) has the same area as each of the two crescents 
formed. Mrs. Miniver wishes thereby to symbolize her vision of the ideal romance. 
The ideal overlap of course symbolizes how much two lovers ideally have in common. 
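Mrs. Miniver's condition can actually be solved numerically. For two equal circles of radius r whose centers are a distance d apart, the standard formula gives the lens-shaped overlap an area of 2r²·arccos(d/2r) − (d/2)·√(4r² − d²); each crescent is a full circle minus the lens, so the condition "lens equals crescent" means the lens is exactly half of one circle. A bisection sketch (mine, not from the text):

```python
import math

def lens_area(d, r=1.0):
    """Overlap area of two circles of radius r whose centers are d apart."""
    return 2 * r**2 * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r**2 - d**2)

def miniver_gap(r=1.0):
    """Center distance at which the lens equals each crescent (pi*r*r/2)."""
    target = math.pi * r**2 / 2
    lo, hi = 0.0, 2 * r              # the overlap shrinks as d grows
    for _ in range(60):              # bisection to machine precision
        mid = (lo + hi) / 2
        if lens_area(mid, r) > target:
            lo = mid                 # still too much overlap: move apart
        else:
            hi = mid
    return (lo + hi) / 2

print(miniver_gap())   # approximately 0.81 (in units of r)
```

So the ideal lovers' circles should stand roughly 0.81 radii apart, which makes Mrs. Miniver's romantic prescription surprisingly precise.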

I have concocted a playful name for this imaginary sphere: I call it the 
implicosphere, which stands for implicit counterfactual sphere, referring to things that 
never were but that we cannot help seeing anyway. (The word can also be taken as 
referring to the sphere of implications surrounding any given idea. A visual 
representation of an implicosphere is shown in Figure 12-5.) If we wish to enlist 
computers as our partners in this venture of inventing variations on a theme, which is 
to say, turning implicospheres into "explicospheres", we have to give them the ability 
to spot knobs themselves, not just to accept knobs that we humans have spotted. To 
do this we will have to look deeply into the nature of "slippability", into the fine- 
grained structure of those networks of concepts in human minds. 

* * * 

One way to imagine how slippability might be realized in the mind is to 
suppose that each new concept begins life as a compound of previous concepts, and 
that from the slippability of those concepts, it inherits a certain amount of slippability. 
That is, since any of its constituents can slip in various ways, this induces modes of 
slippage in the whole. Generally, letting a constituent concept slip in its simplest ways 
is enough, since when more than one of these is done at a time, that can already create 
many unexpected effects. Gradually, as the space of possibilities of the new concept- 
the implicosphere-is traced out, the most common and useful of those slippages 
become more closely and directly associated with the new concept itself, rather than 
having to be derived over and over from its constituents. This way, the new concept's 
implicosphere becomes more and more explicitly explored, and eventually the new 
concept becomes old and reaches the point where it too can be used as a constituent of 
fresh new young concepts. 
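One crude way to render this inheritance of slippability in code (entirely my own sketch, not a model proposed in the text) is to represent a compound concept as named constituent slots, each carrying its own "nearby" alternatives, and to generate subjunctive variants by letting exactly one slot slip at a time:

```python
# A compound concept as constituent slots, each with nearby alternative
# fillers. The slots and alternatives are invented, echoing the
# coffeehouse scene used earlier in the chapter.
scene = {
    "companion": ["Eve Rybody", "Ann Yone"],
    "venue":     ["coffeehouse", "pancake house"],
    "crowding":  ["crowded", "nearly empty"],
    "season":    ["summer", "winter"],
}

def simple_slips(concept):
    """Variants obtained by slipping exactly one constituent slot."""
    base = {slot: options[0] for slot, options in concept.items()}
    for slot, options in concept.items():
        for alt in options[1:]:
            yield {**base, slot: alt}

variants = list(simple_slips(scene))
print(len(variants))   # 4: one variant per single-slot slippage
```

Compounding slips (two or more slots at once) multiplies the variants combinatorially, which is the sense in which even the simplest slippages of constituents already trace out a sizable implicosphere for the whole.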

Some examples of this sort of thing were presented in my column for 
September, 1981 (Chapter 23). Now although September is almost October 
and 1981 is almost 1982, that doesn't quite mean that you have those examples at your 
mind's fingertips, or on the tip of your mind's tongue. So let me present a few more 
examples of slippage of a new notion based on slipping some of its parts in their 
simplest ways. The notion I have chosen is that of yourself sitting there, reading this 
very column at this very moment. Here are some elements of the implicosphere of 
that concept: 

You are almost reading the September 1981 issue of Scientific American. 

You are almost reading a piece by Richard Hofstadter, the historian. 

You are almost reading a column by Martin Gardner. 

Your identical twin is almost reading this column. 

You are almost reading this column in French. 

You are almost reading Godel, Escher, Bach. 

You are almost reading a letter from me. 

You are almost writing this column. 

You are almost hearing my voice. 

I am almost talking to you. 

You are almost ready to throw this copy of Mad magazine out in disgust. 

By now, the original concept is almost lost in a silly sea of "almost" 
variations-but it has been enriched by this exploration, and when you come back to it, 
it will have been that much more reified as a stand-alone concept, a single entity 
rather than a compound entity. After a while, under the proper triggering 
circumstances, this very example may be retrieved from memory as naturally and 
effortlessly as the concept of "fish" is. 

This is an important idea: the test of whether a concept has really come into its 
own, the test of its genuine mental existence, is its retrievability by that process of 
unconscious recall. That's what lets you know that it has been firmly planted in the 
soil of your mind. It is not whether that concept appears to be "atomic", in the sense 
that you have a single word to express it by. That is far too superficial. 

Here is an example to illustrate why. A friend told me recently that the 
Encyclopaedia Britannica's first edition (1768-71) consisted of three volumes: 
Volume I: "A-B"; Volume II: "C-L"; and Volume III: the rest of the alphabet. In 
that edition, 511 pages were devoted to topics beginning with 'A', while the last 
volume had 753 pages altogether! (I guess that in those days there weren't yet many 
interesting things around that began with letters between 'M' and 'Z'.) Hearing this 
amusing fact instantaneously triggered the retrieval of the memory, implanted in me 
years and years ago under totally unremembered circumstances, of how records used 
to be made, back in the days when there was no magnetic tape and the master disk 
was actually cut during the live performance. The performers would be playing along 
and all of a sudden the recording engineer would notice that there wasn't much room 
left on the plate, so the performers would be given a signal to hurry up, and as a 
result, the tempo would be faster and faster 
the further toward the center the needle came. I think it is obvious why the one 
triggered retrieval of the other. And yet-is it obvious? 

On the surface, these two concepts are completely unrelated. One concerns 
printed matter, books, the alphabet, and so on, while the other concerns plastic disks, 
sounds, performers, recording techniques, and so on. However, at some deeper 
conceptual level, these really are the same idea. There is just one idea here, and this 
idea I call a conceptual skeleton. Try to verbalize it. It's certainly not just one word. It 
will take you a while. And when you do come up with a phrase, chances are it will be 
awkward and stilted-and still not quite right! 

Both of the cited instances of this conceptual skeleton (in itself nameless, 
majestically nonverbalizable) are floating about in the implicosphere that surrounds it, 
along with numerous other examples that I am unaware of, not yet having twiddled 
enough knobs on that concept. I don't yet even know which knobs it has! But I may 
eventually find out. The point is that the concept itself has been reified; this much is 
proven by the fact that it acts as a point of immediate reference, that my memory 
mechanisms are capable of using it as an "address" (a key for retrieval) under the 
proper circumstances. The vast majority of our concepts are wordless in this way, 
although we can certainly make stabs at verbalizing them when we need to. 

* * * 

Early in this column, I stated a thesis: that the crux of creativity resides in the 
ability to manufacture variations on a theme. I hope now to have sufficiently fleshed 
out this thesis that you understand the full richness of what I meant when I said 
"variations on a theme". The notion encompasses knobs, parameters, slippability, 
counterfactual conditionals, subjunctives, "almost"-situations, implicospheres, 
conceptual skeletons, mental reification, memory retrieval-and more. 

The question may persist in your mind: Aren't variations on a theme somehow 
trivial, compared to the invention of the theme itself? This leads one back to that 
seductive notion that Einstein and other geniuses are "cut from a different cloth" from 
ordinary mortals, or at least that certain cognitive acts done by them involve 
principles that transcend the everyday ones. This is something I do not believe at all. 
If you look at the history of science, for instance, you will see that every idea is built 
upon a thousand related ideas. Careful analysis leads one to see that what we choose 
to call a new theme is itself always some sort of variation, on a deep level, of previous 
themes. The trick is to be able to see the deeply hidden knobs! 

Newton said that if he had seen further than others, it was only by standing on 
the shoulders of giants. Too often, however, we simply indulge in wishful thinking 
when we imagine that the genesis of a clever or beautiful idea was somehow due to 
unanalyzable, magical, transcendent insight rather than to any mechanisms-as if all 
mechanisms by their very nature were necessarily shallow 
and mundane. 

My own mental image of the creative process involves viewing the 
organization of a mind as consisting of thousands, perhaps millions, of overlapping 
and intermingling implicospheres, at the center of each of which is a conceptual 
skeleton. The implicosphere is a flickering, ephemeral thing, a bit like a swarm of 
gnats around a gas-station light on a hot summer's night, perhaps more like an 
electron cloud, with its quantum-mechanical elusiveness, about a nucleus, blurring out 
and dying off the further removed from the core it is (Figure 12-5). If you have 
studied quantum chemistry, you know that the fluid nature of chemical bonds can best 
be understood as a direct consequence of the curious quantum-mechanical overlap of 
electronic wave functions in space, wave functions belonging to electrons orbiting 
neighboring nuclei. In a metaphorically similar way, it seems to me, the crazy and 
unexpected associations that allow creative insights to pop seemingly out of nowhere 
may well be consequences of a similar chemistry of concepts with its own special 
types of "bonds" that emerge out of an underlying "neuron mechanics". 

Novelist Arthur Koestler has long been a champion of a mystical view of 
human creativity, advocating occult views of the mind while at the same time 
eloquently and objectively describing its workings. In his book The Act of Creation, 
he presents a theory of creativity whose key concept he calls "bisociation"-the 
simultaneous activation and interaction of two previously unconnected concepts. This 
view emphasizes the coming-together of two concepts, while bypassing discussion of 
the internal structure of a single concept. In Koestler's view, something new can 
happen when two concepts "collide" and fuse- something not present in the concepts 
themselves. This is in keeping with Koestler's philosophy that wholes are somehow 
greater than the sum of their parts. 

By contrast, I have been emphasizing the idea of the internal structure of one 
concept. In my view, the way that concepts can bond together and form conceptual 
molecules on all levels of complexity is a consequence of their internal structure. 
What results from a bond may surprise us, but it will nonetheless always have been 
completely determined by the concepts involved in the fusion, if only we could 
understand how they are structured. Thus the crux of the matter is the internal 
structure of a single concept and how it "reaches out" toward things it is not. The crux 
is not some magical, mysterious process that occurs when two indivisible concepts 
collide; it is a consequence of the divisibility of concepts into subconceptual elements. 
As must be clear from this, I am not one to believe that wholes elude description in 
terms of their parts. I believe that if we come to understand the "physics of concepts", 
then perhaps we can derive from it a "chemistry of creativity", just as we can derive 
the principles of the chemistry of atoms and molecules from those of the physics of 
quanta and particles. But as I said earlier, it is not just around the corner. Mental 
bonds will probably turn out to be no less subtle than chemical bonds. Alan Turing's 
words of cautious 
enthusiasm about artificial intelligence remain as apt now as they were in 1950, when 
he wrote them in concluding his famous article "Computing Machinery and 
Intelligence": "We can only see a short distance ahead, but we can see plenty there 
that needs to be done." 

Recently I happened to read a headline on the cover of a popular electronics 
magazine that blared something about "CHIPS THAT SEE". Bosh! I'll start believing 
in "chips that see" as soon as they start seeing things that never were, and asking 
"Why not?" 

Post Scriptum. 

Knobs, knobs, everywhere 
Just vary a knob to think. 

Some readers objected to the slogan of this column-that making variations on 
a theme is the crux of creativity. They felt, and quite rightly, that making variations 
(i.e., twisting knobs) is as easy as falling off a log. So how can genius be that easy? 
Part of the answer is: For a genius, it is easy to be a genius. Not being a genius would 
be excruciatingly hard for a genius. However, this isn't a completely satisfactory 
answer for people who pose this objection. They feel that I am unwittingly implying 
that it is easy for anybody to be a genius: after all, a crank can crank a knob as deftly 
as a genius can. The crux of their objection, then, is that the crux of creativity is not in 
twiddling knobs, but in spotting them! 

Well, that is exactly what I meant by my slogan. Making variations is not just 
twiddling a knob before you; part of the act is to manufacture the knob yourself. 
Where does a knob come from? The question amounts to asking: How do you see a 
variable where there is actually a constant ? More specifically: What might vary, and 
how might it vary? It's not enough to just have the desire to see something different 
from what is there before you. Often the dullest knobs are a result of someone's 
straining to be original, and coming up with something weak and ineffective. So 
where do good knobs come from? I would say they come from seeing one thing as 
something else. Once an abstract connection is set up via some sort of analogy or 
reminding-incident, then the gate opens wide for ideas to slosh back and forth 
between the two concepts. 

A simple example: A friend and I noticed a fuel-delivery truck pulling into a 
driveway, and on it was very conspicuously printed "NSF", standing for "North Shore 
Fuel". However, to us those letters meant "National Science Foundation" as surely as 
"TNT" means "trinitrotoluene" to Eve Rybody. Now, we could have just let the 
coincidence go, but instead we played with it. We envisioned a National Science 
Foundation truck pulling up to a research institute. The driver gets out of the cab, 
drags a thick flexible hose over to a 
hole in the wall of a building and inserts it, then starts up a loud motor, and pumps a 
truckload of money-presumably in large bills-into the cellar of the building. (Wouldn't 
it be nice if grants were delivered that way?) This vision then led us to pondering the 
way that money actually does flow between large institutions: usually as abstract, 
intangible numbers shot down wires as binary digits, rather than as greenbacks hauled 
about in large trucks. 

This very small incident serves well to illustrate how a simple reminding- 
incident triggered a series of thoughts that wound up in a region of idea-space that 
would have been totally unanticipable moments before. All that was needed was for 
an inappropriate meaning of "NSF" to come to mind, and then to be explored a bit. 
Such opportunities for being reminded of something remote- such double-entendre 
situations-occur all the time, but often they go unobserved. Sometimes the ambiguity 
is observed but shrugged off with disinterest. Sometimes it is exploited to the hilt. In 
this example, the result was not earthshaking, but it did cast things in a new light for 
both of us, and the image amused us quite a bit. And this way of exploiting 
serendipity-that is, exploiting coincidences and unexpected perceived similarities-is 
typical of what I consider the crux of the creative process. 

* * * 

Serendipitous observation and quick exploration of potential are vital elements 
in the making of a knob. What goes hand in hand with the willingness to playfully 
explore a serendipitous connection is the willingness to censor or curtail an 
exploration that seems to be leading nowhere. It is the flip side of the risk-taking 
aspect of serendipity. It's fine to be reminded of something, to see an analogy or a 
vague connection, and it's fine to try to map one situation or concept onto another in 
the hopes of making something novel emerge-but you've also got to be willing and 
able to sense when you've lost the gamble, and to cut your losses. One of the 
problems with the ever-popular self-help books on how to be creative is that they all 
encourage "off-the-wall" thinking (under such slogans as "lateral thinking", 
"conceptual blockbusting", "getting whacked on the head", etc.) while glossing over 
the fact that most off-the-wall connections are of very little worth and that one could 
waste lifetimes just toying with ideas in that way. One needs something much more 
reliable than a mere suggestion to "think zany, out-of-the-system thoughts". 

Frantic striving to be original will usually get you nowhere. Far better to relax 
and let your perceptual system and your category system work together 
unconsciously, occasionally coming up with unbidden connections. At that point, 
you (the lucky owner of the mind in question) can seize the opportunity and follow out 
the proffered hint. This view of creativity has the conscious mind being quite passive, 
content to sit back and wait for the unconscious 
to do its remarkable broodings and brewings. 

The most reliable kinds of genuine insight come not from vague reminding 
experiences (as with the letters "NSF"), but from strong analogies in which one 
experience can be mapped onto another in a highly pleasing way. The tighter the fit, 
the deeper the insight, generally speaking. When two things can both be seen as 
instances of one abstract phenomenon, it is a very exciting discovery. Then ideas 
about either one can be borrowed in thinking about the other, and that sloshing-about 
of activity may greatly illumine both at once. For instance, such a connection (i.e., 
mapping) between sexism and racism resulted in my "Person Paper" (Chapter 8). 
Another example is Scott Kim's brilliant article "Noneuclidean Harmony", in which 
mathematics and music are twisted together in the most amazing ways. It can be 
found in The Mathematical Gardner, an anthology dedicated to Martin Gardner, 
edited by David Klarner. 

A mapping-recipe that often yields interesting results is projection of oneself 
into a situation: "How would it be for me?" This can mean a host of things, depending 
on how you choose to inject yourself into the scene, which is in turn determined by 
what grabs your attention. The man who focused in on the bustling activity in the 
coffeehouse and said, "I'm sure glad I'm not a waitress here tonight!" might instead 
have been offended by the sounds reaching his ears and said, "If I were the owner 
here, I'd play less Muzak" -or he might have zeroed in on someone purchasing a 
brownie and said, "I wish I were that thin." People are remarkably fluid at seeing 
themselves in roles that they self-evidently could never fill, and yet the richness of the 
insights thus elicited is beyond doubt. 

* * * 

When I first heard the French saying Plus ça change, plus c'est la même chose, 
it struck me as annoyingly nonsensical: "The more it changes, the samer it gets" (in 
my own colloquial translation). I was not amused but nonetheless it stuck in my mind 
for years, and finally it dawned on me that it was full of meanings. My favorite way 
of interpreting it is this. The more different manifestations you observe of one 
phenomenon, the more deeply you understand that phenomenon, and therefore the 
more clearly you can see the vein of sameness running through all those different 
things. Or put another way, experience with a wide variety of things refines your 
category system and allows you to make incisive, abstract connections based on deep 
shared qualities. A more cynical way of putting it, and probably more in line with the 
intended meaning, would be that superficially different things are often boringly the 
same. But the saying need not be taken cynically. 

Seeing clear to the essence of something unfamiliar is often best achieved by 
finding one or more known things that you can see it as, then being able to balance 
these views. Physicists have long since learned to juggle two views of light: light as 
waves, light as particles. They know that each contains a grain of the 
essence of light, that neither contains it all, and they know when to think of light 
which way. Don't be fooled by people who knowingly assure you that physicists don't 
depend on crude images or analogies as crutches, that everything they need is 
contained in their formulas. The fallacy here is that which formula to apply, how to 
apply it, and what parts of it to neglect are all aspects not covered in any formula, 
which is why doing physics is a great art, despite the fact that there are formulas all 
over the place for Eve Rybody and her brother to use. 

Seeing anything as waves suggests immediate knobs: wavelength, frequency, 
amplitude, speed, medium, and a host of other basic notions that define the essence of 
undularity. Seeing anything as particles suggests totally different knobs: mass, shape, 
radius, rotation, constituents, and a host of other basic notions that define the essence 
of corpuscularity. If you choose to see, say, people as waves or as particles, you may 
find some of these suggested knobs quite interesting. On the other hand, it may not be 
fruitful to do so. Good analogies usually are not the product of an off-the-wall 
suggestion like this, but spring to mind unbidden, from the deep similarity- searching 
wells of the unconscious. 

Once you have decided to try out a new way of viewing a phenomenon, you 
can let that view suggest a set of knobs to vary. The act of varying them will lead you 
down new pathways, generating new images ripe for perception in their own right. 
This sets up a closed loop: 

• fresh situations get unconsciously framed in terms of familiar concepts; 

• those familiar concepts come equipped with standard knobs to twiddle; 

• twiddling those knobs carries you into fresh new conceptual territory. 

A visual image that I always find coming back in this context is that of a 
planet orbiting a star, and whose orbit brings it so close to another star that it gets 
"captured" and begins orbiting the second star. As it swings around the new star, 
perhaps it finds itself coming very close to yet another star, and fickly changes 
allegiance. And thus it do-si-do's its way around the universe. 

The mental analogue of such stellar peregrinations is what the loop above 
attempts to convey. You can think of concepts as stars, and knob-twiddling as 
carrying you from one point on an orbit to another point. If you twiddle enough, you 
may well find yourself deep within the attractive zone of an unexpected but 
interesting concept and be captured by it. You may thus migrate from concept to 
concept. In short, knob-twiddling is a device that carries you from one concept to 
another, taking advantage of their overlapping orbits. 

Of course, all this cannot happen with a trivial model of concepts. We see it 
happening all the time in minds, but to make it happen in computers or to locate it 
physically in brains will require a fleshing-out of what concepts really are. It is fine to 
talk of "orbits around concepts" as a metaphor, but developing it into a full scientific 
notion that either can be realized in a computer model or can be located inside a brain 
is a giant task. This is the task that faces cognitive scientists if they wish to make 
"concept" a legitimate scientific term. This goal, suggested at the start of this article, 
could be taken to be the central goal of cognitive science, although such things are 
often forgotten in the inane hoopla that is surrounding artificial intelligence more and 
more these days. 

The cycle shown above spells out what I intend by the phrase "making 
variations on a theme", and it is this loop that I am suggesting is the crux of creativity. 
The beauty of it is that you let your memory and perceptual mechanisms do all the 
hard work for you (pulling concepts from dormancy); all you do is twiddle knobs. 
And I'll let you decide what this odd distinction is between something called "you" 
and the hard-working mechanisms of "your memory". 

* * * 

The concept of the "implicosphere" of an idea (the sphere of variations on it 
resulting from the twiddling of many knobs a "reasonable" amount) is a difficult one, 
but it is absolutely central to the meaning of this column. One way of thinking about it 
is this. Imagine a single gnat attracted by a bright light. It will buzz about, tracing out 
a three-dimensional random walk centered on that light. If you keep a photographic 
plate exposed so that you can record its path cumulatively, you will first see a chaotic 
broken line, but soon the image will get so dense with criss-crossing lines that it will 
gradually turn into a circular smear of slowly increasing radius. At the outer edges of 
the smear you might once in a while make out an occasional foray of the lone bug. 
For a while, the territory covered expands, but eventually this gnat-o-sphere will reach 
a stable size. Its silhouette, instead of being a sharp-edged circle, will be a blurry 
circle (see Figure 12-5a) whose approximate radius reveals something about how 
gnats are attracted by lights. 
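The gnat-o-sphere image can even be played with in code. The sketch below is purely illustrative, and the attraction model and its parameters (`pull`, the Gaussian jitter, the 90% quantile) are my own assumptions rather than anything stated in the text; it simulates a gnat doing a random walk with a weak pull back toward the light, and shows that the cumulative smear settles to a stable radius rather than growing without bound:

```python
import math
import random

def gnat_walk(steps=20000, pull=0.05, seed=42):
    """Random walk with a weak attraction toward a 'light' at the origin.
    Returns the list of visited positions (the cumulative 'gnat-o-sphere')."""
    random.seed(seed)
    x = y = 0.0
    visits = []
    for _ in range(steps):
        # random jitter, plus a gentle tug back toward the light
        x += random.gauss(0, 1) - pull * x
        y += random.gauss(0, 1) - pull * y
        visits.append((x, y))
    return visits

def smear_radius(visits, quantile=0.9):
    """Radius of the blurry circle containing `quantile` of all visits."""
    radii = sorted(math.hypot(x, y) for x, y in visits)
    return radii[int(quantile * len(radii))]
```

However long the photographic plate stays "exposed" (however many steps we run), the smear's radius is set by the balance between jitter and pull, which is the analogue of the claim that each implicosphere has its own characteristic size.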

Now if you simply think of this translated into idea-space, you have roughly 
the right image. Of course, not all implicospheres have the same radius. Some 
people's implicospheres tend to have bigger radii than other people's do, and 
consequently their implicospheres overlap more. This can be good but it can be 
overdone. Too much overlap (Figure 12-5b) and all you have is a mush of vaguely 
associated ideas, an overdone and tasteless mental goulash. Too little overlap (Figure 
12-5d) and you have a very thin, watery mind, one with few big surprises (except for 
the meta-level surprise of having so few surprises). There is, in other words, an 
optimum amount of overlap for useful creative insight (Figure 12-5c). This is the kind 
of thing that cannot be taught, however. It would be like trying to train a gnat to control the 
size of the spheres it traces out. Or if you prefer, it would be like trying to train an 
entire swarm of gnats to form spheres of a particular size whenever they cluster 
around lamps. The problem is, it is already preprogrammed in gnats how much they 
are attracted by lights, by each other, and so on. 

In my view, mindpower is a consequence of how implicospheres in idea-space 
emerge from the statistical predispositions of neurons to fire in response to each other. 
Such deep statistical patterns of each brain cannot be altered, although of course a few 
superficial aspects can be altered. You can teach somebody to think of applehood 
whenever they think of mother pie, for instance-but adding any number of specific 
new associative connections does not have any effect on the underlying statistics of 
how their neurons work. So in that sense I am gravely doubtful about courses or 
books that promise to improve your thinking style or capabilities. Sure, you can add 
new ideas-but that's a far cry from adding pizzazz. The mind's perceptual and 
category systems are too much at the "subcognitive" level to be reached via 
cognitive-level training techniques. If you are old enough to be reading this book, then your 
deep mental hardware has been in place for many years, and it is what makes your 
thinking style idiosyncratic and recognizably "you". (If you are not, then what are you 
doing reading this book? Put it down immediately!) For more on the ideas of 
subcognition and identity, see Chapters 25 and 26. 

When a new idea is implanted in a mind, an implicosphere grows around it. 
Since this means, in essence, the linking-up of this new idea with older ideas, I call it 
"diffusion in idea-space". My canonical example of this phenomenon, although it is a 
rather grim one, has to do with the recent spate of random murders inspired by the 
spiking of Tylenol capsules with strychnine. It was the Food and Drug 
Administration's response that so intrigued me, because it implicitly revealed a theory 
of how this idea would diffuse in the idea- space of a typical potential murderer. The 
FDA imposed a set of packaging regulations on manufacturers, with various types of 
products being given various deadlines for compliance. The idea was that your 
potential murderer could slip from the idea of Tylenol to that of aspirin in a week's 
time, but it would take the expanding sphere longer to hit the brilliant idea that it 
could be just any over-the-counter drug. It was not just the FDA that seemed to think 
this way; radio talk-show hosts, too, seemed to love speculating about what drug might be 
chosen next, but I never heard them worrying about ordinary food in grocery stores. 
Yet why should it give a stochastic killer any less joy to kill by spiking a jar of 
mustard than by spiking a drug? In fact, if your goal in life is to see masses of random 
people die, there are all sorts of routes you can take that don't involve ingestion at all. 
A friend of mine took a train from Washington to New York and en route her train 
smashed into a washing machine full of rocks that had been placed on the tracks by 
some do-badder. Was this part of the Tylenol-murders implicosphere in the mind of 
the person who did it? I doubt it, but it is possible. 

In its own gruesome way, the generalization of the Tylenol murders resembles 
that of the expanding implicosphere of the Cube-and that of any idea that arises. 
Ideas, whether evil or beneficial, have their own dynamics of spreading in and among 
minds. Here we are primarily talking about intramind spreading (implicospheres), but 
intermind spreading (infectious memes) was discussed in Chapter 3. 

* * * 

Slippage of thought is a remarkably invisible phenomenon, given its ubiquity. 
People simply don't recognize how curiously selective they are in their "choice" of 
what is and what is not a hinge point in how they think of an event. It all seems so 
natural as to require no explanation. 

I dropped a slice of pizza on the floor of a pizza place the other evening. My 
friend Don, who was less hungry than I was, immediately sympathized, saying, "Too 
bad I didn't drop one of my pieces-or that you didn't drop one of mine instead of one 
of yours." Sounds sensible. But why didn't he say, "Too bad the pizza isn't larger"? 
His choice revealed that to his unconscious mind, it seemed sensible to switch the 
role-filler in a given event, as if to imply that a pizza-slice-droppage had been in the 
cards for that evening, that God had flipped a coin and, unluckily for me, it had come 
out with me as the dropper instead of Don-but that it might have come out the other 
way around. 

Some hypothetical replacement scenarios-I like to call them "subjunctive 
instant replays"-are compelling, and come to mind by reflex. They are not idle 
musings but very natural human emotional responses to a common type of 
occurrence. Other subjunctive instant replays have little intuitive appeal and seem far- 
fetched, although it is hard to say just why. Consider the following list: 

Too bad they didn't give us a replacement piece. 
Lucky we weren't in a really fancy restaurant. 
Too bad gravity isn't weaker, so that you could have caught it before it hit the floor. 
Lucky it wasn't a beaker filled with poison. 
Too bad it wasn't a fork. 
Lucky it wasn't a piece of good china. 
Too bad eating off floors isn't hygienic. 
Lucky you didn't drop the whole pizza. 
Too bad it wasn't the people at the next table who dropped their pizza. 
Lucky there was no carpet in here. 
Too bad you were the hungry one, rather than me. 

Variations on a Theme as the Crux of Creativity 


I'll leave it to you to generate other subjunctive instant replays that he might have 
come up with. There is a rough rank ordering to them, in terms of plausibility of 
springing to mind. It's the rhyme and reason behind that ordering that fascinates me. 
Why do people find it not only plausible but even compelling to make remarks like 
the following? 

If Jesse Jackson were a white man, he'd be elected President. 

If Jesse Jackson were a white man, he'd be running for dogcatcher. 

These two sentences came from random voters, as quoted in Newsweek. I wonder 
what slips in people's minds when they imagine a white Jesse Jackson. Do they 
envision a preacher in a Baptist church? Is this person an ardent fighter for civil 
rights? Or, conversely, an ardent fighter against the quota system? Similarly, what 
does a high-school boy mean when he says, "If I were my father, I wouldn't lend me 
the car"? Does he ever notice that if he were his father, he would ipso facto be his 
own son? Or need that be so? Would the two have exchanged roles? The point is, 
there are a host of questions left completely open here, yet no one balks for a second 
at such counterfactuals. In fact, they are common currency, they are daily bread, they 
are the meat and potatoes of communication. But some types of counterfactuals never 
(or hardly ever) come up, while others, equally reality-violating, are a dime a dozen. 
Daniel Kahneman and Amos Tversky, cognitive psychologists, have made studies of 
how much emotion people generate upon reading stories of just-missed airplanes or 
just-caught airplanes-especially ones that crash. These kinds of near misses, whether 
fortunate or unfortunate, tug at our hearts and do so in nearly universal ways. 
Something about these slippability examples is truly at the core of what it is to be 
human and to experience the world through the filter of the human mind. 
Philosophers and artificial-intelligence researchers by and large have not paid much 
attention to the "catchiness" of a given counterfactual. Logicians have devoted a lot of 
time and effort to trying to figure out what it would mean for a given counterfactual to 
be true, but to my mind, that's not nearly as interesting-or even as meaningful-a 
question as these more psychological questions: 

Which counterfactuals are likely to be triggered in a human mind by various 
types of events in the world? 

Why are some events perceived to be "near misses", while others are not? 

Why are some deaths of innocent people viewed as more tragic than other 
deaths of innocent people? 

At such points where deep human emotion, identification with other beings, and 
perception of reality meet lies the crux of creativity-and also the crux of the most 
mundane thoughts. Spinning out variations is what comes naturally to the human 
mind, and is it ever fertile! 



Metafont, Metamathematics, 
and Metaphysics: Comments on 
Donald Knuth's Article 
"The Concept of a Meta-Font" 

August, 1982 

The Mathematization of Categories, and Metamathematics 

Donald Knuth has spent the past several years working on a system allowing him 
to control many aspects of the design of his forthcoming books, from the typesetting and layout 
down to the very shapes of the letters! Seldom has an author had anything remotely like 
this power to control the final appearance of his or her work. Knuth's TEX typesetting 
system has become well known and available in many countries around the world. By 
contrast, his METAFONT system for designing families of typefaces has not become as 
well known or as available. 

In his article "The Concept of a Meta-Font", Knuth sets forth for the first time the 
underlying philosophy of METAFONT, as well as some of its products. Not only is the 
concept exciting and clearly well executed, but in my opinion the article is charmingly 
written as well. However, despite my overall enthusiasm for Knuth's idea and article, 
there are some points in it that I feel might be taken wrongly by many readers, and since 
they are points that touch close to my deepest interests in artificial intelligence and 
esthetic theory, I felt compelled to make some comments to clarify certain important 
issues raised by "The Concept of a Meta-Font". 

Although his article is primarily about letterforms, not philosophy, Knuth holds 
out in it a philosophically tantalizing prospect for us: that with the 
arrival of computers, we can now approach the vision of a unification of all typefaces. 
This can be broken down into two ideas: 

(1) That underneath all 'A's there is just one grand, ultimate abstraction that can be 
captured in a finitely parametrizable computational structure-a "software 
machine" with a finite number of "tunable knobs" (we could say "degrees of 
freedom" or "parameters", if we wished to be more dignified); 

(2) That every conceivable particular 'A' is just a product of this machine with its 
knobs set at specific values. 

Beyond the world of letterforms, Knuth's vision extends to what I shall call the 
mathematization of categories: the idea that any abstraction or Platonic concept can be so 
captured-that is, as a software machine with a finite number of knobs. Knuth gives only a 
couple of examples-those of the "meta-waltz" and the "meta-shoe"-but by implication one 
can imagine a "meta-chair", a "meta-person", and so forth. 

This is perhaps carrying Knuth's vision further than he ever intended. Indeed, I
suspect so; I doubt that Knuth believes in the feasibility of such a "mathematization of
categories" opened up by computers. Yet any imaginative reader would be likely to draw
hints of such a notion out of Knuth's article, whether Knuth intended it that way or not. It
is my purpose in this article to argue that such a vision is exceedingly unlikely to come
about, and that such intriguingly flexible tools as meta-shoes, meta-fonts, modern
electronic organs (with their "oom-pah-pah" and "cha-cha-cha" rhythms and their canned
harmonic patterns), and other many-knobbed devices will only help us see more clearly
why this is so. The essential reason for this I can state in a very short way: I feel that to
fill out the full "space" defined by a category such as "chair" or "waltz" or "face" or 'A'
(see Figures 12-2, 12-3, and 12-4) is an act of infinite creativity, and that no finite entity
(inanimate mechanism or animate organism) will ever be capable of producing all
possible 'A's and nothing but 'A's (the same could be said for chairs, waltzes, etc.).

I am not making the trivial claim that, because life is finite, nobody can make an
infinite number of creations; I am making the nontrivial claim that nobody can possess
the "secret recipe" from which all the (infinitely many) members of a category such as
'A' can in theory be generated. In fact, my claim is that no such recipe exists. Another
way of saying this is that even if you were granted an infinite lifetime in which to draw
all the 'A's you could think up, thus realizing the full potential of any recipe you had, no
matter how great it might be, you would still miss vast portions of the space of 'A's.

In metamathematical terms, this amounts to positing that any conceptual (or semantic)
category is a productive set, a precise notion whose characterization is a formal
counterpart to the description in the previous paragraph: namely, a set whose elements
cannot be totally enumerated by any effective procedure without overstepping the bounds
of that set, but which can be approximated more and more fully by a sequence of
increasingly complex effective procedures. The existence and properties of such sets
first became known as a result of Godel's Incompleteness Theorem of 1931. It is
certainly not my purpose here to explain this famous result, but a short synopsis might be
of help. (Some useful references are: Chaitin, DeLong, Nagel and Newman, Rucker, and
my book Godel, Escher, Bach.)

An Intuitive Picture of Godel's Theorem 

Godel was investigating the properties of purely formal deductive systems in the 
sphere of mathematics, and he discovered that such systems-even if their ostensible 
domain of discourse was limited to one topic-could be viewed as talking "in code" about 
themselves. Thus a deductive system could express, in its own formal language, 
statements about its own capabilities and weaknesses. In particular, System X could say 
of itself through the Godelian code: 

System X is not powerful enough to demonstrate the truth of Sentence S. 

It sounds a little bit like a science-fiction robot called "ROBOT R-15" droning (of course
in a telegraphic monotone):

ROBOT R-15 IS NOT POWERFUL ENOUGH TO ACCOMPLISH TASK T-12.

Now what happens if TASK T-12 happens, by some crazy coincidence, to be not the
assembly of some strange cosmic device but merely the act of uttering the preceding
telegraphic monotone? (I say "merely", but of course that is a bit ironic.) Then ROBOT R-15
could get only partway through the sentence before choking:

ROBOT R-15 IS NOT POWERF...

Now in the case of a formal system, System X, talking about its powers, suppose that
Sentence G, by an equally crazy coincidence, is the one that says,

System X is regrettably not powerful enough to demonstrate the truth of Sentence G. 

In such a case, Sentence G is seen to be an assertion of its own unprovability within 
System X. In fact we do not have to rely on crazy coincidences, for Godel showed that 
given any reasonable formal system, a G-type sentence for that system actually exists. 
(The only exaggeration in my English-language version of G is that in formal systems 
there is no way to say "regrettably".) In formal deductive systems, this foldback takes 
place of necessity by means of a Godelian code, but in English no Godelian code is 
needed and the peculiar quality of such a loop is immediately visible. 
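Sentence G attains its self-reference only through a Godelian code; a programming language, by contrast, can quote itself with almost no effort, so the foldback can be exhibited directly. As an illustrative aside of my own (not something from Knuth's article), here is a minimal Python quine - a program whose output is exactly its own source text:

```python
# A minimal quine: a program whose output is its own source text -- the same
# self-referential "foldback" that Godel numbering achieves inside formal
# systems, done here directly because a program, unlike a formula of
# arithmetic, can quote itself cheaply.  The %r conversion produces a quoted
# repr of the string, and %% is a literal percent sign.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two lines above verbatim; feeding that output back into Python prints them again, forever.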



If you think carefully about Sentence G, you will discover some amazing things. 
Could Sentence G be provable in System X? If it were, then System X would contain a 
proof for Sentence G, which asserts that System X contains no proof for Sentence G. 
Only if System X is blatantly self-contradictory could this happen-and a formal reasoning 
system that is self-contradictory is no more useful than a submarine with screen doors. 
So, provided we are dealing with a consistent formal system (one with no self-
contradictions), then Sentence G is not provable inside System X. And since this is
precisely the claim of Sentence G itself, we conclude that Sentence G is true - true but
unprovable inside System X.

One last way to understand this curious state of affairs is afforded the reader by this 
small puzzle. Choose the more accurate of the following pair of sentences: 

(1) Sentence G is true despite being unprovable. 

(2) Sentence G is true because it is unprovable.

You'll know you've really caught on to "Godelism" when both versions ring equally
true to your ears, when you flip back and forth between them, savoring that exceedingly
close approach to paradox that G affords. That's how twisted back on itself Sentence G is.

The main consequence of G's existence within each System X is that there are truths
unattainable within System X, no matter how powerful and flexible System X is, as long 
as System X is not self-contradictory. Thus, if we look at truths as objects of desire, no 
formal system can have them all; in fact, given any formal system we can produce on 
demand a truth that it cannot have, and flaunt that truth in front of it with taunting cries of 
"Nyah, nyah!" The set of truths has this peculiar and infuriating quality of being 
uncapturable by any finite system, and worse, given any candidate system, we can use 
what we know about that system to come up with a specific Godelian truth that eludes 
provability inside that system. 

By adding that truth to the given system, we come up with an enlarged and slightly 
more powerful system-yet this system will be no less vulnerable to the Godelian devilry 
than its predecessor was. Imagine a dike that springs a new leak each time the proverbial 
Dutch boy plugs up a hole with his finger. Even if he had an infinite number of fingers, 
that leaky dike would find a spot he hadn't covered. A system that contains at least one 
unprovable truth is said to be incomplete, and a system that not only contains such truths 
but that cannot be rescued in any way from the fate of incompleteness is said to be 
essentially incomplete. Another name for sets with this wonderfully perverse property is 
productive. (For detailed coverage of the metamathematical ideas in this article, see the 
book by Rogers.) 
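The "new leak for every plugged hole" behavior can be made concrete with a Cantor-style diagonal construction. The sketch below is my own illustrative analogue, not a formal rendering of Godel's proof: each "system" is an effective enumeration of infinite binary sequences, and for any such enumeration we can effectively compute a sequence it misses; plugging that leak just yields a new enumeration with a fresh leak.

```python
# Diagonalization sketch: for ANY effective enumeration of binary sequences,
# construct one it provably misses.  (An illustrative analogue of a
# productive set, not a formal proof about theorem-proving systems.)

def diagonal(enumeration):
    """Given enumeration(i, j) -> bit j of sequence i, return a function
    computing a sequence that differs from every enumerated one."""
    return lambda j: 1 - enumeration(j, j)  # flip the j-th bit of sequence j

def augment(enumeration, extra):
    """Prepend the missed sequence, yielding a new, slightly larger enumeration."""
    return lambda i, j: extra(j) if i == 0 else enumeration(i - 1, j)

base = lambda i, j: (i >> j) & 1        # some initial effective enumeration
d1 = diagonal(base)                     # a sequence base misses
bigger = augment(base, d1)              # plug the leak...
d2 = diagonal(bigger)                   # ...and a new leak springs at once
```

No matter how many times `augment` is applied, `diagonal` produces a fresh escapee - the Dutch boy's dike in five lines.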

My claim-that semantic categories are productive sets-is, to be sure, not a 
mathematically provable fact, but a metaphor. This metaphor has been used by others 
before me-notably, the logicians Emil Post and John Myhill-and I have written of it 
myself before (see Chapter 23). 



Completeness and Consistency 

Note that it is important to have the potential to fill out the full (infinite) space, and
equally important not to overstep it. However, merely having infinite potential is not by
any means equivalent to filling out the full space. After all, any existing METAFONT
'A'-schema - even one having just one degree of freedom! - will obviously give us
infinitely many distinct 'A's as we sweep its knob (or knobs) from one end of the
spectrum to the other. Thus to have an 'A'-making machine with infinite variety of
potential output is not in itself difficult; the trick is to achieve completeness: to fill the
entire space.

And yet, isn't it easy to fill the space? Can't one easily make a program that will
produce all possible 'A's? After all, any 'A' can be represented as a pattern of pixels (dots
that are either off or on) in an m × n matrix - hence a program that merely prints out all
possible combinations of pixels in matrices of all sizes (starting with 1 × 1 and moving
upwards to 2 × 1, 1 × 2, 3 × 1, 2 × 2, 1 × 3, etc., as in Georg Cantor's famous
enumeration of the rational numbers) will certainly cover any given 'A' eventually. This
is quite true. So what's the catch?
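The dovetailed generation scheme just described can be sketched in a few lines - an illustrative sketch of the enumeration order, not a program anyone would run to completion, since it never halts:

```python
from itertools import count, product

def all_pixel_patterns():
    """Dovetailed enumeration of every binary matrix of every size, ordered
    Cantor-style by m + n: 1x1, then 2x1, 1x2, then 3x1, 2x2, 1x3, and so on.
    Every possible bitmap -- every 'A', every 'K', every frog -- appears
    eventually."""
    for s in count(2):                       # s = m + n, the "anti-diagonal"
        for m in range(s - 1, 0, -1):        # 2x1 before 1x2, as in the text
            n = s - m
            for bits in product((0, 1), repeat=m * n):
                yield [list(bits[r * n:(r + 1) * n]) for r in range(m)]

gen = all_pixel_patterns()
first = next(gen)   # the all-blank 1x1 matrix, [[0]]
```

The generator is trivially complete over bitmaps; the entire difficulty of the 'A' category has been shifted onto the (unwritten, and I claim unwritable) screening program.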

Well, unfortunately, it is hard - very hard - to write a screening program that will retain
all the 'A's in the output of this pixel-pattern program, and at the same time will reject all
'K's, pictures of frogs, octopi, grandmothers, trolley cars, and precognitive photographs of
traffic accidents in the twenty-fifth century (to mention just a few of the potential outputs
of the generation program). The requirement that one must stay within the bounds of a
conceptual category could be called consistency - a constraint complementary to that of
completeness.

In summary, what might seem desirable from a knobbed category machine is the joint 
attainment of two properties-namely: 

(1) Completeness: that all true members of a category (such as the category of 'A's
or the category of human faces) should be potentially producible eventually as
outputs;

(2) Consistency: that no false members of the category ("impostors") should ever be
potentially producible (in short, that the set of outputs of the machine should
coincide exactly with the set of members of the intuitive category).
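Phrased in the language of sets, the twin requirements amount to two inclusions whose conjunction is set equality. Here is a toy rendering of mine over a finite universe of labels (real categories, of course, are infinite, which is the whole point):

```python
# Completeness is "category is a subset of outputs"; consistency is
# "outputs is a subset of category"; attaining both means the two sets
# coincide exactly.  A toy, finite illustration only.

def completeness(outputs, category):
    return category <= outputs      # every true member is producible

def consistency(outputs, category):
    return outputs <= category      # no impostors are producible

category = {"A1", "A2", "A3"}
machine  = {"A1", "A2", "K7"}       # misses A3, and emits the impostor K7
```

This machine fails on both counts at once, which is exactly the fate Godel's theorem predicts (in metaphor) for any finite 'A'-machine facing the full, infinite category.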

The twin requirements of consistency and completeness are metaphorical equivalents 
of well-known notions by the same names in metamathematics, denoting desirable 
properties of formal systems (theorem-producing machines)-namely: 

(1) Completeness: that all true statements of a theory (such as the theory of
numbers or the theory of sets) should be potentially producible eventually as
theorems;

(2) Consistency: that no false statements of the theory should ever be potentially
producible (in short, that the set of theorems of the formal system should
coincide exactly with the set of truths of the informal theory).

The import of Godel's Incompleteness Theorem is that these two idealized goals are 
unreachable simultaneously for any "interesting" theory (where "interesting" really means 
"sufficiently complex"); nonetheless, one can approach the set of truths by stages, using 
increasingly powerful formal systems to make increasingly accurate approximations. The 
goal of total and pure truth is, however, as unreachable by formal methods as is the speed 
of light by any material object. I suggest that a parallel statement holds for any 
"interesting" category (where again, "interesting" means something like "sufficiently 
complex", although it is a little harder to pin down): namely, one can do no better than 
approach the set of its members by stages, using increasingly powerful knobbed 
machines to make increasingly accurate approximations. 

Intuition at first suggests that there is a crucial difference between the 
(metamathematical) result about the nonformalizability of truth and the (metaphorical) 
claim about the nonmechanizability of semantic categories; this difference would be that 
the set of all truths in a mathematical domain such as set theory or number theory is 
objective and eternal, whereas the set of all 'A's is subjective and ephemeral. However,
on closer examination, this distinction begins to blur quite a bit. The very fact of Godel's 
proven nonformalizability of mathematical truth casts serious doubt on the objective 
nature of such truth. Just as one can find all sorts of borderline examples of 'A'-ness, 
examples that make one sense the hopelessness of trying to draw the concept's exact 
boundaries, so one can find all sorts of borderline mathematical statements that are 
formally undecidable in standard systems and that, even to a keen mathematical intuition, 
hover between truth and falsity. And it is a well-known fact that different mathematicians 
hold different opinions about the truth or falsity of various famous formally undecidable 
propositions (the axiom of choice in set theory is a classic example). Thus, somewhat
counterintuitively, it turns out that mathematical truth has no fixed and eternal
boundaries, either. And this suggests that perhaps my metaphor is not so far off the mark
after all.

A Misleading Claim for METAFONT 

Whatever the validity and usefulness of this metaphor, I shall now try to show some 
evidence for the viewpoint that leads to it, using METAFONT as a prime example of a 
"knobbed category machine". In his article, Knuth comes perilously close, in one 
throwaway sentence, to suggesting that he sees METAFONT as providing us with a 
mathematization of categories. I doubt he suspected that anyone would focus in on that
sentence as if it were the key sentence of the article - but as he did write it, it's fair game!
That sentence ran:

The ability to manipulate lots of parameters may be interesting and fun, but does anybody
really need a 6 1/7-point font that is one fourth of the way between Baskerville and
Helvetica?
This rhetorical question is fraught with unspoken implications. It suggests that 
METAFONT as it now stands (or in some soon-available or slightly modified version) is 
ready to carry out, on demand, for any user, such an interpolation between two given 
typefaces. There is something very tricky about this proposition that I suspect most 
readers will not notice: it is the idea that jointly parametrizing two typefaces is no harder,
and no different in principle, than parametrizing one typeface in isolation.

Indeed, to many readers, it would appear that Knuth already has carried out such a
joint parametrization. After all, in printing Psalm 23 (Figure 12-1) didn't he move from
an old-fashioned, compact, serifed face with relatively tall ascenders and descenders and
small x-height all the way to the other end of the spectrum: a modern-looking, extended,
sans-serif face with relatively short ascenders and descenders and large x-height? Yes, of
course - but the critical omitted point here is that these two ends of the spectrum were not
pre-existing, prespecified targets; they just happened to emerge as the extreme products
of a knobbed machine designed so that one more or less intermediate setting of its knobs
would yield a particular target typeface (Monotype Modern Extended 8A, in case you're
curious).

In other words, this particular set of knobs was inspired solely and directly by an 
attempt to parametrize one typeface (Monotype Modern). The two extremes shown in the 
psalm are both variations on that single theme; the same can be said of every intermediate 
stage as well. There is only one underlying theme (Monotype Modern) here, and a cluster 
of several hundred variants of it, each one of which is represented by a single character. 
The psalm does not represent the marriage of two unrelated families, but simply exhibits 
many members of one large family. 

Joint Parametrization of Two Typefaces: 
A Far Cry from Parametrizing One Typeface 

You can envision all the variants of Monotype Modern produced by twiddling the 
knobs on this particular machine as constituting an "electron cloud" surrounding a single 
"nucleus" (see Figure 12-5a). Now by contrast, joint parametrization of two pre-existent,
known typefaces (say, Baskerville and Helvetica, as Knuth suggests; see Figure 13-1)
would be like a cloud of electrons swarming around two nuclei, like a chemical bond (see
Figure 12-5c).

In order to jointly parametrize two typefaces in METAFONT, you would need to find,
for each pair of corresponding letters (say, Baskerville 'a' and Helvetica 'a'), a set of
discrete geometric features (line segments, serifs, extremal points, points of curvature
shift, etc.) that they share and that totally characterize them. Each such feature must be
equated with one or more parameters (knobs), so that the two letterforms are seen as
produced by specific settings of their shared set of knobs. Moreover, all intermediate
settings must also yield valid instances of the letter 'a'. That is the very essence of the
notion of a knobbed machine, and it is also the gist of the quote, of course: that we
should now (or soon) be able to interpolate between any familiar typefaces merely by
knob-twiddling.

FIGURE 13-1. Two typefaces of great beauty and subtlety. In (a), Baskerville; in (b),
Helvetica Light.

Now I will admit that I think it is perhaps feasible - though much more difficult than
parametrizing a single typeface - to jointly parametrize two typefaces that are not radically
different. It is not trivial, to cite just one sample difficulty, to move between Baskerville's
round dot over the 'i' and Helvetica's square dot - but it is certainly not inconceivable.
Conversely, it is not inconceivable to move between the elegant swash tail of the
Baskerville 'Q' and the stubby straight tail of the Helvetica 'Q' - but it is certainly not
trivial.

Moving from letter to letter and comparing them will reveal that each of these two
typefaces has features that the other totally lacks. (Incidentally, you should disregard
lowercase 'g', since the 'g's of our two typefaces are as different from each other as
Baskerville 'B' is from Helvetica 'H'; in both cases, the two letterforms being compared
derive from entirely different underlying "Platonic essences". It is METAFONT's
purpose to mediate between different stylistic renditions of a single "Platonic essence",
not between distinct "Platonic essences".) Presumably, in a case where one typeface
possesses some distinct feature that the other totally lacks, there is a way to fiddle with 
the knobs that will make the feature nonexistent in one but present in the other. For 
instance, a knob setting of zero might make some feature totally vanish. Sometimes it 
will be harder to make features disappear - it might require several knobs to have
coordinated settings. Nonetheless, despite all the complex ways that Baskerville and
Helvetica differ, I repeat: it is conceivable that somebody with great patience and
ingenuity could jointly parametrize Helvetica and Baskerville. But the real question is
this: Would such a joint parametrization easily emerge out of two separate,
independently carried-out parametrizations of these typefaces?

Hardly! The Baskerville knobs do not contain in them even a hint of the Helvetica 
qualities - or the reverse. How can I convince you of this? Well, just imagine how great the
genius of John Baskerville, an eighteenth-century Briton, would have had to be for his
design to have implicitly defined another typeface - and a typeface only discovered (or
invented) two centuries later, by Max Miedinger from Switzerland! To see this more
concretely, imagine that someone who had never seen Helvetica naively created a 
METAFONT rendition of Baskerville (that is, a meta-font centered on Baskerville in the 
same sense as Knuth's sample meta-font is centered on Monotype Modern). Now imagine 
that someone else who does know Helvetica comes along, twiddles the knobs of this 
Baskerville meta-font, and actually produces a perfect Helvetica! It would be nearly as 
strange as having a marvelous music-composing program based exclusively on the style 
of Dr. William Boyce (who composed in England in a baroque, elegant eighteenth- 
century style) that was later discovered, totally unexpectedly, to produce many pieces 
indistinguishable in style from the music of Arthur Honegger (who composed in 
Switzerland in a sparse, crisp twentieth-century style) when various melodic, harmonic,
and rhythmic parameters were twiddled. To me, this is simply inconceivable; eighteenth-
century style did not contain within it, no matter how implicitly, twentieth-century style,
whether in music or in visual arts.

Interpolating Between an Arbitrary Pair of Typefaces 

The worst is yet to come, however. Presumably Knuth did not wish us to take his 
rhetorical question in such a limited way as to imply that the numbers 6 1/7 and 1/4 were 
important. Pretty obviously, they were just examples of arbitrary parameter settings. 
Presumably, if METAFONT could easily give you a 6 1/7-point font that is 1/4 of the
way between Baskerville and Helvetica, it could as easily give you an 11 2/3-point font
that is 5/17 of the way between Baskerville and Helvetica - and so on. And why need it be
restricted to Baskerville and Helvetica? Surely those numbers weren't the only "soft" 
parts of the rhetorical question! Common sense tells us that Helvetica and Baskerville 
were also merely arbitrary choices of typeface. Thus the hidden implication is that, as 
easily as one can twiddle a dial to change point size, so one can twiddle another dial (or 
set of dials) and arrive at any desired typeface, be it Helvetica, Baskerville, or whatever. 
Knuth might just as easily have put it this way: 

The ability to manipulate lots of parameters may be interesting and fun, but does
anybody really need an n-point font that is x percent of the way between typeface
T1 and typeface T2?



For instance, we might have set the four knobs to the following settings:

n: 36
x: 50 percent
T1: Magnificat
T2: Stop

Each of these two typefaces (see Figure 13-2) is ingenious, idiosyncratic, and visually 
intriguing. I challenge any reader to even imagine a blend halfway between them, let 
alone draw it! And to emphasize the flexibility implied by the question, how about trying 
to imagine a typeface that is (say) one third of the way between Cirkulus and Block Up? 
Or one that is somewhere between Explosion and Shatter? (For these typefaces, see 
Figure 13-2.) 

A Posteriori Knobs and the Frame Problem of AI

Shatter, incidentally, provides an excellent example of the trouble with viewing
everything as coming from parameter settings. If you look carefully, you will see that
Shatter is indeed a "variation on a theme", the theme being Helvetica Medium Italic (see
Figure 13-2). But does that imply that any meticulous parametrization of Helvetica would 
automatically yield Shatter as one of its knob-settings? Of course not. That is absurd. No 
one in their right mind would anticipate such a variation while parametrizing Helvetica, 
just as no one in their right mind when delivering their Nobel Lecture would say, "Thank 
you for awarding me my first Nobel Prize." When someone wins a Nobel Prize, they do 
not immediately begin counting how many they have won. Of course, if they win two, 
then a knob will spontaneously appear in most people's minds, and friends will very 
likely make jokes about the next few Nobel Prizes. Before the second prize, however, the 
"just-one" quality would have been an unperceived fact. 

This is closely related to a famous problem in cognitive science (the study of formal 
models of mental processes, especially computer models) called the frame problem. This 
knotty problem can be epitomized as follows: How do I know, when telling you I'll meet 
you at 7 at the train station, that it makes no sense to tack on the proviso, "as long as no 
volcano erupts along the way, burying me and my car on the way to the station", but that 
it does make reasonable sense to tack on the proviso, "as long as no traffic jam holds me 
up"? And of course, there are many intermediate cases between these two. The frame 
problem is about the question: What variables (knobs) is it within the bounds of normalcy 
to perceive? Clearly, no one can conceivably anticipate all the factors that might 
somehow be relevant to a given situation; one simply blindly hopes that the species' 
evolution and the individual's life experiences have added up to a suitably rich 
combination to make for satisfactory behavior most of the time. There are too many 
contingencies, however, to try to anticipate them all, even given the most powerful 
computer. One reason for the extreme difficulty in trying to make machines able to learn
is that we find it very hard to articulate a set of rules defining when it makes sense and
when it makes no sense to perceive a knob. It is a fascinating task to work on making a
machine capable of coaxing shy knobs out of the woodwork.

FIGURE 13-2. A series of diverse typefaces: (a) Magnificat; (b) Stop; (c) Cirkulus; (d)
Block Up; (e) Explosion; (f) Shatter; (g) Helvetica Medium Italic.

This brings us back to Shatter, seen as a variation on Helvetica. Obviously, once you've
seen such a variation, you can add a knob (or a few) to your METAFONT "Helvetica 
machine", enabling Shatter to come out. (Indeed, you could add similar "Shatterizing" 
knobs to your "Baskerville machine", for that matter!) But this would all be a posteriori: 
after the fact. The most telling proof of the artificiality of such a scheme is, of course, that 
no matter how many variations have been made on (say) Helvetica, people can still come 
up with many new and unanticipated varieties, such as: Helvetica Rounded, Helvetica 
Rounded Deco, Helvetican Flair, and so on (see Figure 13-3). 

No matter how many new knobs-or even new families of knobs-you add to your 
Helvetica machine, you will have left out some possibilities. People will forever be able 
to invent novel variations on Helvetica that haven't been foreseen by a finite 
parametrization, just as musicians will forever be able to devise novel ways of playing 
"Begin the Beguine" that the electronic organ builders haven't yet built into their
elaborate repertoire of canned rhythms, harmonies, and so forth. To be sure, the organ
builders can always build in extra possibilities after they have been revealed, but by then
a creative musician will have long since moved on to other styles. One can imagine
Helvetica modified in many novel ways inspired by various extant typefaces. I leave it to
readers to try to imagine such variants.

FIGURE 13-3. Three "simple" offshoots of Helvetica: (a) Helvetica Rounded; (b)
Helvetica Rounded Deco; (c) Helvetican Flair.

A Total Unification of All Typefaces? 

The worst is still yet to come! Knuth's throwaway sentence unspokenly implies 
that we should be able to interpolate any fraction of the way between any two arbitrary 
typefaces. For this to be possible, any pair of typefaces would have to share the exact 
same set of knobs (otherwise, how could you set each knob to an intermediate setting?). 
And since all pairs of typefaces have the same set of knobs, transitivity implies that all 
typefaces would have to share a single, grand, universal, all-inclusive, ultimate set of 
knobs. (The argument is parallel to the following one: If any two people have the same 
number of legs as each other, then leg-number is a universal constant for all people.) 

Thus we realize that Knuth's sentence casually implies the existence of a
"universal 'A'-machine" - a single METAFONT program with a finite set of parameters,
such that any combination of settings of them will yield a valid 'A', and conversely, such
that any valid 'A' will be yielded by some combination of settings of them. Now how can
you possibly incorporate all of the previously shown typefaces into one universal
schema?

Or look again at the 56 capital A's of Figure 12-3. Can you find in them a set of
specific, quantifiable features? (For a comparable collection for each letter of the
alphabet, see the marvelous collection of alphabetical logos compiled by Kuwayama.)
Imagine trying to pinpoint a few dozen discrete features of the Magnificat 'A' (A7) and
simultaneously finding their "counterparts" in the Univers 'A' (D3). Suppose you have
found enough to characterize both completely. Now remember that every intermediate
setting also must yield an 'A'. This means we will have every shade of "cross" between
the two typefaces.

This intuitive sense of a "cross" between two typefaces is common and natural,
and occurs often to typeface lovers when they encounter an unfamiliar typeface. They
may characterize the new face as a cross between two familiar typefaces ("Vivaldi is a
cross between Magnificat and Palatino Italic Swash") or else they may see it as an
exaggerated rendition of a familiar typeface ("Magnificat is Vivaldi squared") (see Figure
13-4). What degree of truth is there to such a statement? All one can really say is that
each Magnificat letter looks "sort of like" its Vivaldi counterpart, only about "twice as
fancy" or "twice as curly" or something vague along those lines. But how could a single
"curliness" knob account for the mysteriously beautiful meanderings, organic and
capricious, in each Magnificat letter?





FIGURE 13-4. A transition from curved to whirly to superswirly: (a) Palatino Italic
Swash caps; (b) Vivaldi caps; (c) Magnificat caps. It is provocative to compare this figure
with Figure 16-7.

Can you imagine twisting one knob and watching thin, slithery tentacles begin to grow
out of the Palatino Italic 'A', snaking outwards eventually to form the Vivaldi 'A', then
continuing to twist and undulate into ever more sinuous forms, yielding the Magnificat
'A' in the end? And who says that that is the ultimate destination? If Magnificat is
Vivaldi squared, then what is Magnificat squared?

Specialists in computer animation have had to deal with the problem of 
interpolation of different forms. For example, in a television series about evolution, there 
was a sequence showing the outline of one animal form slowly transforming into another 
one. But one cannot simply tell the computer, "Interpolate between this shape and that 
one!" For each point in one shape there must be explicitly specified a corresponding
point in the other. Then one lets the computer draw some intermediate positions on one's screen, to
see if the choice works. A lot of careful "tuning" of the correspondences between figures 
must be done before the interpolation looks good. There is no recipe that works in general 
for interpolation. The task is deeply semantic, not cheaply syntactic. 
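The "cheap syntactic" half of the task - blending two outlines once a human has hand-specified the point correspondence - really is trivial, as this sketch of mine shows. Everything hard hides in the assumption that the two point lists already correspond:

```python
# Once corresponding points have been specified by hand, the "cheap
# syntactic" part of shape interpolation is one multiply-add per coordinate.
# Choosing the correspondence -- the deeply semantic part -- is exactly what
# this sketch takes as given.

def interpolate(shape_a, shape_b, t):
    """Blend two outlines given as equal-length lists of (x, y) points;
    t = 0 yields shape_a, t = 1 yields shape_b."""
    if len(shape_a) != len(shape_b):
        raise ValueError("correspondence must pair every point with a partner")
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(shape_a, shape_b)]

triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
wedge    = [(0.0, 0.0), (2.0, 0.0), (0.2, 0.4)]
halfway  = interpolate(triangle, wedge, 0.5)  # "1/2 of the way between" them
```

Note that the function cannot even be called until someone has decided which point "corresponds" to which - and a bad pairing yields intermediate shapes that look like nothing at all.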

For a wonderful demonstration of the truth of this, look at the little book Double 
Takes, in which artist Tom Hachtman has a lot of fun taking unlikely pairs of people and 
combining their caricatures. His only prerequisite is that their names should splice 
together amusingly. Thus he did "Bing Cosby" (Bing Crosby and Bill Cosby), "Farafat"
(Farrah Fawcett-Majors and Yasir Arafat), "Marlon Monroe" (Marlon Brando and
Marilyn Monroe), and many others. The trick is to discern which features of each person
are the most characteristic and modular, and to be able to construct a new person having
a subtle blend of those features, clearly enough that both contributors can be recognized.
For a viewer, it's almost like trying to recognize the parents in a baby's face.

The Essence of 'A'-ness Is Not Geometrical

Despite all the difficulties described above, some people, even after scrutinizing the
wide diversity of realizations of the abstract 'A' concept, maintain that they all do share a
common geometric quality. They sometimes verbalize it by saying that all 'A's have "the
same shape" or are "produced from one template". Some mathematicians are inclined to
search for a topological or group-theoretical invariant. A typical suggestion might be:
"All instances of 'A' are open at the bottom and closed at the top." Well, in Figure 12-3,
sample A8 (Stop) seems to violate both of these criteria. And many others of the sample
letters violate at least one of them. In several examples, such concepts as "open" or
"closed" or "top" or "bottom" apply only with difficulty. For instance, is G7 (Sinaloa)
open at the bottom? Is F4 (Calypso) closed at the top? What about A4 (Astra)?

The problem with the METAFONT "knobs" approach to the 'A' category is that
each knob stands for the presence or absence (or size or angle, etc.) of some specifically
geometric feature of a letter: the width of its serifs, the height of its crossbar, the lowest
point on its left arm, the highest point along some extravagant curlicue, the amount of
broadening of a pen, the average slope of the ascenders, and so forth. But in many 'A's,
such notions are not even applicable. There may be no crossbar, or there may be two or
three or more. There may be no curlicue, or there may be a few curlicues.

A METAFONT joint parametrization of two 'A's presumes that they share the
same features, or what might be called "loci of variability". It is a bold (and, I maintain,
absurd) assumption that one could get any 'A' by filling out an eternal and fixed
questionnaire: "How wide is its crossbar? What angle do the two arms make with the
vertical? How wide are its serifs?" (and so forth). There may be no identifiable part that
plays the crossbar role, or the left-arm role; or some role may be split among two or more
parts. You can easily find examples of these phenomena among the 56 'A's in Figure 12-3.
Some other examples of what I call role splitting, role combining, role transferral, role
redundancy, role addition, and role elimination are shown in Figure 13-5. These terms
describe the ways that conceptual roles are apportioned among various geometric entities,
which are readily recognized by their connectedness and gentle curvatures.

For a remarkable demonstration of ways to exploit these various role- 
manipulations, see Scott Kim's book Inversions, in which a single written specimen, or 
"gram", has more than one reading, depending on the observer's point of view. Often the 
"grams" are symmetric and read the same both ways, but this is not essential: some have 
two totally different 

FIGURE 13-5. Examples of: (a) role splitting; (b) role merging; (c) role transferral; (d)
role redundancy; (e) role addition; and (f) role elimination. The idea in all these
examples is that one smooth sweep of the pen need not fill exactly one coherent
conceptual role. It may fill two or more roles (or parts of two or more); it may fill less
than one, in which case several strokes combine to make one role; and so on. Sometimes
roles can be added or deleted without serious harm to the recognizability of the letter.
Angles, cusps, intersections, endpoints, extrema, blank areas, and separations often play
roles no less vital than those played by strokes.

readings. The essence is imbuing a single written form with ambiguity. Both Scott and I
have for years done such drawings, dubbed "ambigrams" by a friend of mine, and a few of
my own are presented in Figure 13-6, as well as the one on the half-title page. The 
strange fluidity of letterforms is brought out in a most vivid way by ambigrammatic art. 

Incidentally, it is most important that I make it clear that although I find it easier 
to make my points with somewhat extreme or exotic versions of letters (as in ambigrams 
or unusual typefaces), these points hold just as strongly for more conservative letters. 
One simply has to look at a finer grain size, and all the same kinds of issues reappear. 

Chauvinism versus Open-Mindedness: 
Fixed Questionnaires versus Fluid Roles 

When I was twelve, my family was about to leave for Geneva, Switzerland, for a
year, so I tried to anticipate what my school would be like. The furthest my imagination 
could stretch was to envision a school that looked exactly like my one-story Californian 
stucco junior high school, only with classes in French (twiddling the "language" knob) 
and with the schoolbus that would pick me up each morning perhaps pink instead of 
yellow (twiddling the "schoolbus color" knob). I was utterly incapable of anticipating the 

FIGURE 13-6. Several ambigrams by the author. Deciphered, they say: "ambigram";
"ambigrams"; "winter"; "spring"; "summer"; "fall"; "Ijtt Sallow"; "Josh Bell"; "Alejandro" and
"Magdalena" (reflections of each other); "Carol"; "David Moser"; "Chopin"; and "Johann
Sebastian Bach". All three composers' names utilize 90-degree rotation. Notice the extensive use
of all the devices shown in Figure 13-5, namely role splitting, merging, transferral, redundancy,
addition, and elimination. See the half-title page for a further ambigram by the author.

difference that there actually turned out to be between the Geneva school and my 
California school. 

Likewise, there are many "exobiologists" who have tried to anticipate the features 
of extraterrestrial life, if it is ever detected. Many of them have made assumptions that to 
others appear strikingly naive. Such assumptions have been aptly dubbed chauvinisms by 
Carl Sagan. There is, for instance, "liquid chauvinism", which refers to the phase of the 
medium in which the chemistry of life is presumed to take place. There is "temperature 
chauvinism", which assumes that life is restricted to a temperature range not too different 
from that here on the planet Earth. In fact, there is planetary chauvinism, the idea that all
life must exist on the surface of a planet orbiting a certain type of star. There is carbon 
chauvinism, assuming that carbon must form the keystone of the chemistry of any sort of 
life. There is even speed chauvinism, assuming that there is only one "reasonable" rate 
for life to proceed at. And so it goes. 

If a Londoner arrived in New York, we might find it quaint (or perhaps pathetic) 
if he or she asked "Where is your Big Ben? Where are your Houses of Parliament? 
Where does your Queen live? When is your teatime?" The idea that the biggest city in the 
land need not be the capital, need not have a famous bell tower in it, and so on, seems
totally obvious after the fact, but to the naive tourist it can come as a surprise. (See
Chapter 24 for more on strange mappings between Great Britain and the United States.)
The point here is that when it comes to fluid semantic categories such as 'A', it is equally
naive to presume that it makes sense to refer to "the crossbar" or "the top" or to any
constant feature. It is quite like expecting to find "the same spot" in any two pieces of
music by the same composer. The problem, I have found, is that most people continue to
insist that any two instances of 'A' have "the same shape", even when confronted with
such pictures as Figure 12-3. Figure 12-4 helps, however, to dispel that sort of notion (as 
does Figure 24-13). 

The analogy between Britain and the United States is a useful one to continue for 
a moment. The role that London plays in England is certainly multifaceted, but two of its 
main facets are "chief commercial city" and "capital". These two roles are played by 
different cities in the U.S. On the other hand, the role that the American President plays 
in the U.S. is split into pieces in Britain, part being carried by the Queen (or King), and 
part by the Prime Minister. Then there is a subsidiary role played by the President's wife,
the "First Lady". Her counterpart in Britain is also split, and moreover, these days, "wife"
has to be replaced by "husband", no matter whether one considers that the "President of
England" is the Queen or the Prime Minister. (Again, see Chapter 24 for much more
detail on this kind of analogy problem.) 

To think one can anticipate the complete structure of one country or language 
purely on the basis of being intimately familiar with another one is presumptuous and, in 
the end, preposterous. Even if you have seen 

dozens, you have not exhausted the potential richness and novelty in such domains. In 
fact, the more instances you have seen, the more circumspect you are about making 
unwarranted presumptions about unseen instances, although-a bit paradoxically-your 
ability to anticipate the unanticipated (or unanticipable) certainly improves! The same 
holds for instances of any letter of the alphabet or other semantic category. 

The 'A' Spirit

Clearly there is much more going on in typefaces than meets the eye, literally. The
shape of a letterform is a surface manifestation of deep mental abstractions. It is
determined by conceptual considerations and balances that no finite set of merely
geometric knobs could capture. Underneath or behind each instance of 'A' there lurks a
concept, a Platonic entity, a spirit. This Platonic entity is not an elegant shape such as the
Univers 'A' (D3), not a template with a finite number of knobs, not a topological or
group-theoretical invariant in some mathematical heaven, but a mental abstraction, a
different sort of beast. Each instance of the 'A' spirit reveals something new about the
spirit without ever exhausting it. The mathematization of such a spirit would be a
machine with a specific set of knobs on it, defining all its "loci of variability" once
and for all. I have tried to show that to expect this is simply not reasonable. In fact, I
made the following claim, above:

No matter how many new knobs, or even new families of knobs, you add to your ...
machine, you will have left out some possibilities. People will forever be able to
invent novel variations ... that haven't been foreseen by a finite parametrization ....

Of what, then, is such an abstract "spirit" composed? Or is it simply a mystically
elusive, noncapturable essence that defies the computational, indeed the scientific,
approach totally? Not at all, in my opinion. I simply think that a key idea is missing in
what I have described so far. And what is this key idea? I shall first describe the key
misconception. It is to try to capture the essence of each separate concept in a separate
"knobbed machine", that is, to isolate the various Platonic spirits. The key insight is that
those spirits overlap and mingle in a subtle way.

Happy Roles, Unhappy Roles, and Quirk-Notes 

The way I see it, the Platonic essence lurking behind any concrete letterform is 
composed of conceptual roles rather than geometric parts. (A related though not identical 
notion called "functional attributes" was discussed by Barry Blesser and co-workers in 
Visible Language as early as 1973.) A role, in my sense of the term, does not have a fixed 
set of parameters defining the extent of its variability, but it has instead a set of 

tests or criteria to be applied to candidates that might be instances of it. For a candidate to 
be accepted as an instance of the role, not all the tests have to be passed; not all the 
criteria have to be present. Instead, the candidate receives a score computed from the tests 
and criteria, and there is a threshold point above which the role is "happy" and below 
which it is "unhappy". Then below that, there is a cutoff point below which the role is 
totally dissatisfied, and rejects the candidate outright. 
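The scoring scheme just described can be caricatured in code. Everything specific below is invented for illustration: the tests, the weights, and the two numeric thresholds are hypothetical stand-ins, not anything proposed in the text.

```python
# A toy "role" that scores candidate fillers with a battery of weighted
# tests. At or above one threshold the role is "happy"; between the two
# thresholds it is "unhappy" but still accepts; below the cutoff it
# rejects the candidate outright. All numbers here are invented.

class Role:
    def __init__(self, name, tests, happy=0.75, cutoff=0.40):
        self.name = name
        self.tests = tests          # list of (weight, predicate) pairs
        self.happy = happy          # score at/above which the role is happy
        self.cutoff = cutoff        # score below which it rejects outright

    def judge(self, candidate):
        total = sum(w for w, _ in self.tests)
        score = sum(w for w, test in self.tests if test(candidate)) / total
        if score < self.cutoff:
            return "rejected", score
        return ("happy" if score >= self.happy else "unhappy"), score

# A hypothetical "crossbar" role: roughly horizontal, roughly mid-height,
# preferably drawn in a single stroke.
crossbar = Role("crossbar", [
    (2.0, lambda c: abs(c["slope"]) < 0.3),      # nearly horizontal
    (1.0, lambda c: 0.3 < c["height"] < 0.7),    # near the middle
    (1.0, lambda c: c["strokes"] == 1),          # one smooth sweep
])

print(crossbar.judge({"slope": 0.1, "height": 0.5, "strokes": 1}))  # happy
print(crossbar.judge({"slope": 0.5, "height": 0.5, "strokes": 2}))  # rejected
```

Note that not all tests need pass: a tilted, two-stroke crossbar can still fill the role, merely leaving it "unhappy" (and generating quirk-notes, in the terminology introduced below).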

An example of such a role is that of "crossbar". Note that I am not saying 
"crossbar in capital 'A'", but merely "crossbar". Roles are modular: they jump across 
letter boundaries. The same role can exist in many different letters. This is, of course, 
reminiscent of the fact that in METAFONT, a serif (or generally, any geometric feature 
shared by several letters) can be covered by a single set of parameters for all letters, so 
that all the letters of the typeface will alter consistently as a single knob is turned. One 
difference is that my notion of "role" doesn't have the generative power that a set of 
specific knobs does. From the fact that a given role is "happy" with a specific geometric 
filler, one cannot deduce exactly how that filler looks. There is, of course, more to a role's 
"feelings" about its filler than simply happiness or unhappiness; there are a number of 
expectations about how the role should be filled, and the fulfillment (or lack thereof) can 
be described in quirk-notes. Thus, quirk-notes can describe the unusual slant of a crossbar 
(see Arnold Bocklin-E1 in Figure 12-3), the fact that it is filled by two strokes rather than
one (Airkraft-E3), the fact that it fails to meet (or has an unusual way of meeting) its 
vertical mate (Eckmann Schrift-A2; Le Golf-F5), and many other quirks. 

These quirk-notes are characterizations of stylistic traits of a perceived letterform. 
They do not contain enough information, however, to allow a full reconstruction of that 
letterform, whereas a METAFONT program does contain enough information for that. 
However, they do contain enough information to guide the creation of many specific 
letterforms that have the given stylistic traits. All of them would be, in some sense, "in 
the same style". 

Modularity of Roles 

The important thing is that this modularity of roles allows them to be exported to
other letters, so that a quirk-note attached to a particular role in 'A' could have relevance
to 'E', 'L', or 'T'. Thus stylistic consistency among different letters is a by-product of the
modularity of roles, just as the notion of letter-spanning parameters in METAFONT
gives rise to internal consistency of any typeface it might generate.
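The METAFONT-style sharing of letter-spanning parameters can be sketched as follows. This is a cartoon, not real METAFONT: the parameter name and the crude letter "programs" are hypothetical, but the structural point is Knuth's, namely that one knob feeds many letters.

```python
# A cartoon of METAFONT's letter-spanning parameters: each letter is a
# function of one shared parameter dictionary, so turning a single "knob"
# (here, serif width) alters every letter of the face consistently.
# The parameters and letter descriptions are invented for illustration.

def letter_I(params):
    return f"I: stem, serifs {params['serif_width']}pt wide"

def letter_E(params):
    return f"E: post, three arms, serifs {params['serif_width']}pt wide"

def typeface(params, letters=(letter_I, letter_E)):
    return [draw(params) for draw in letters]

knobs = {"serif_width": 2}
print(typeface(knobs))        # both letters get 2pt serifs

knobs["serif_width"] = 5      # twiddle one knob...
print(typeface(knobs))        # ...and all letters change together
```

The contrast with a "role" is visible even in the cartoon: the parameter generates the serif directly, whereas a role only passes judgment on whatever filler it is shown.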

Furthermore, there are connections among roles so that, for instance, the way in 
which the "crossbar" role is filled in one letter could influence the way that the "post" or 
"bowl" or "tail" role is filled in other letters. This is to avoid the problem of overly 
simplistic mappings of one letter onto another, analogous to the Londoner asking an 
American where the 

American Houses of Parliament are. Just as one must interpret "Houses of Parliament"
liberally rather than literally when "translating" from England to the U.S., so one may
have to convert "crossbar" into some other role when looking for something analogous in
the structure of a letter other than 'A', such as 'N'. In certain typefaces, the diagonal stroke
in 'N' could well be the counterpart of the crossbar in 'A'. But it is important to emphasize
that no fixed (i.e., typeface-independent) mapping of roles in 'A' onto roles in 'N' will
work; only the specific letterforms themselves (via their quirk-notes) can determine what
roles (if any) should be mapped onto each other. Such cross-letter mappings must be
mediated by a considerable degree of understanding of what functions are fulfilled by all
the roles in the two particular letters concerned.

Typographical Niches and Rival Categories 

So far I have sketched very quickly a theory of "Platonic essences" or "letter
spirits" involving modular roles, roles shared among several letters. This sharing of roles
is one aspect of the overlapping and mingling that I spoke of above. There is a second
aspect, which is suggested by the phrase typographical niche. The notion is analogous to 
that of "ecological niche". When, in the course of perception of a letterform, a group of 
roles have been activated and have decided that they are present (whether happily or 
unhappily), their joint presence constitutes evidence that one of a set of possible letters is 
present. (Remember that since a role is not the property of any specific letter, its presence 
does not signal that any specific letter is in view.) 

For instance, the presence of a "post" role and a "bowl" role in certain relative 
positions would suggest very strongly that there is a 'b' present. Sometimes there may be 
evidence for more than one letter. The eye-mind combination is not happy with any such 
unstable state for long, and strains to make a decision. It is as if there is a very steep and 
slippery ridge between valleys, and a ball dropped from above is very unlikely to come to 
settle on top of the ridge. It will tumble to one side or the other. The valleys are the 
typographical niches. 
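The ridge-and-valley image can be caricatured in code. The role inventories below are invented for illustration; a serious model would weigh happy and unhappy role activations rather than merely counting them.

```python
# A toy "typographical niche" chooser: each letter-niche lists the roles
# whose joint presence is evidence for it, and the niche with the best
# evidence wins. A near-tie is the unstable ridge-top: the eye-mind
# combination must tumble to one side. Role inventories are invented.

NICHES = {
    "b": {"post", "bowl"},
    "h": {"post", "arch"},
    "k": {"post", "upper-diagonal", "lower-diagonal"},
}

def settle(active_roles):
    """Score each niche by the fraction of its roles present; report the
    winning letter, or None if the ball is still sitting on a ridge."""
    scores = {letter: len(roles & active_roles) / len(roles)
              for letter, roles in NICHES.items()}
    best = max(scores, key=scores.get)
    tied = [l for l in scores if l != best and scores[l] == scores[best]]
    if tied:
        return None, scores     # unstable: evidence for more than one letter
    return best, scores

print(settle({"post", "bowl"}))   # falls into the 'b' valley
print(settle({"post"}))           # ambiguous: several niches tie
```

A lone "post" leaves several valleys equally attractive, which is exactly why a role's presence, by itself, signals no particular letter.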

Now, the overlapping of letters comes about because each letter is aware of its 
typographical rivals, its next-door neighbors, just over the various ridges that surround its 
space. The letter 'h', for instance, is acutely sensitive to the fact that it has a close rival in 
'k', and vice versa (see Figure 13-7). The letter 'T' is very touchy about having its crossbar
penetrated by the post below, since even the slightest penetration is enough to destroy
its 'T'-ness and to slip it over into 'T's arch-rival niche, 't'. It's a low ridge, and for that
reason, 'T' guards it extra-carefully.

FIGURE 13-7. Have we "hen" or "ken" here? In each case, two niches in the Platonic 
alphabet compete for possession of a single physical specimen. Again, the fluid way in 
which minds are willing to let roles and fillers align is the source of all the trouble. 

This image is, I hope, sufficiently strong to convey the second sense of 
overlapping and intermingling of Platonic essences. "No letter is an island", one might 
say. There has to be much mutual knowledge spread about among all the letters. Letters 
mutually define each other's essences, and this is why an isolated structure supposedly
representing a single letter in all its glory is doomed to failure. 

A letterform-designing computer program based on the above-sketched notions of 
typographical roles and niches would look very different from one that tried to be a full 
"mathematization of categories". It would involve an integration of perception with 
generation, and moreover an ability to generalize from a few letterforms (possibly as few 
as one) to an entire typeface in the style of the first few. It would not do so infallibly; but 
of course it is not reasonable to expect "infallible" performance, since stylistic 
consistency is not an objectively specifiable quality. 

In other words, a computer program to design typefaces (or anything else with an 
esthetic or subjective dimension) is not a conceptual impossibility; 

but one should realize that, no less than a human, any such program will necessarily have
a "personal" taste, and it will almost certainly not be the same as its designer's (or
designers') taste. In fact, to the contrary, the program's taste will quite likely be full of
unanticipated surprises to its programmers (as well as to everyone else), since that taste
will emerge as an implicit and remote consequence of the interaction of a myriad of features
and factors in the architecture of the program. Taste itself is not directly programmable.
Thus, although any esthetically programmed computer will be "merely doing what it was
programmed to do", its behavior will nonetheless often appear idiosyncratic and even
inscrutable to its programmers, reflecting the fact, well known to programmers, that often
one has no clear idea (and sometimes no idea at all) just what it is that one has
programmed the machine to do!

I have made a broad kind of claim: that true understanding of letterforms depends 
on more than understanding something about each Platonic letter in isolation; it depends 
just as much on taking into account the ways that letters and their pieces are interrelated,
on the ways that letters depend on each other to define a total style. In other words, any 
approach to the impossible dream of the "secret recipe" for 'A'-ness requires a 
simultaneous solution to two problems, which I call the vertical and the horizontal 
problems (see Figures 13-8 and 24-14). 

Vertical: What do all the items in any column have in common?
Horizontal: What do all the items in any row have in common? 

FIGURE 13-8. The vertical and horizontal problems. What do all the items in any column
have in common? What do all the items in any row have in common? Answers: Letter;
Spirit. (Compare this figure with Figure 24-14.)

The Vertical and Horizontal Problems:
Two Equally Important Facets of One Problem















FIGURE 13-9. Six elegant faces created by the contemporary designer Hermann Zapf. In
(a), Optima; in (b), Palatino; in (c), Melior; in (d), Zapf Book; in (e), Zapf International;
and in (f), Zapf Chancery.

Actually, there is no reason to stop with two dimensions; the problem seems to exist at 
higher degrees of abstraction. We could lay out our table of comparative typefaces more 
carefully; in particular, we could make it consist of many layers stacked on top of each 
other, as in a cake. On each layer would be aligned many typefaces made by a single 
designer. This idea is illustrated in Figure 13-9, showing a few faces designed by 
Hermann Zapf (Optima, Palatino, Melior, Zapf Book, Zapf International, and Zapf 
Chancery). Along with the Zapf layer, one can imagine a Frutiger layer, a Lubalin layer, 
a Goudy layer, and so on. One could try to arrange the typefaces in each layer in such a 
way that "corresponding" typefaces by various designers are aligned in "shafts".

Now in this three-dimensional cake, the two earlier one-dimensional questions still apply,
but there is also a new two-dimensional question: What do all the items in a given layer 
have in common? The third dimension can be explored as one moves from one layer to 
another, asking what all the 

typefaces in a given "shaft" have in common. Moreover, a fourth dimension can be added if
you imagine many such "layer cakes", one for each distinguishable period of
typographical design. Thus our fourth dimension, like Einstein's, corresponds to time.
Now one can ask about each layer cake: What do all the items herein have in common?
This is a three-dimensional question. Presumably, one could carry this exercise even further.

If we go back to the "simplest" of these questions, the original "vertical" question
applying to Figure 13-8, a naive answer to it could be stated in one word: Letter. And
likewise, a naive answer to the "horizontal" question of that figure is also statable in one
word: Spirit. In fact, the word "spirit" is applicable, in various senses of the term, to all
the higher-dimensional questions, such as "What do all the typefaces produced in the Art
Deco era have in common?" There is such a thing, ephemeral though it may be, as "Art
Deco spirit", just as there is undeniably such a thing as "French spirit" in music or
"impressionistic spirit" in art. (Marcia Loeb has recently designed a whole series of
typefaces in the Art Deco style, in case anyone doubts that the spirit of those times can be
captured. And then there is the book Zany Afternoons by Bruce McCall, in which the
entire spirit of several recent decades is wonderfully spoofed on all stylistic levels.)

Stylistic moods permeate whole periods and cultures, and they indirectly 
determine the kinds of creations-artistic, scientific, technological-that people in them 
come up with. They exert gentle but definite "downward" pressures. As a consequence, 
not only are the alphabets of a given period and area distinctive, but one can even 
recognize "the same spirit" in such things as teapots, coffee cups, furniture, automobiles, 
architecture, and so on, as Donald Bush clearly demonstrates in his book The Streamlined 
Decade. One can be inspired by a given typeface to carry its ephemeral spirit over into 
another alphabet, such as Greek, Hebrew, Cyrillic, or Japanese. In fact, this has been 
done in many instances (see Figure 13-10). The problem I am most concerned with in my 
research is whether (or rather, how) susceptibility to such a "spirit" can be implanted in a 
computer program. 

Letter and Spirit 

These words "letter" and "spirit", of course, recall the contrast between the "letter 
of the law" and the "spirit of the law", and the way in which our legal system is 
constructed so that judges and juries will base their decisions on precedents. This means 
that any case must be "mapped", in a remarkably fluid way, by members of a jury, onto 
previous cases. It is up to the opposing lawyers, then, to be advocates of particular 
mappings; to try to channel the jury members' perceptions so that one mapping dominates 
over another. It is quite interesting that jury decisions are supposed to be unanimous, so 
that in a metaphorical sense, a "phase transition" or "crystallization" of opinion must take 
place. The decision must be solidly locked in, so that it reflects not simply a majority or 
even a consensus, but a totality, a unanimity (which, etymologically, means "one- 
souledness"). (For discussions of such "phase transitions of the mind", see Chapters 25 
and 26, and for descriptions of 


FIGURE 13-10. Transalphabetic leaps by the ethereal "spirit" inherent in a given
typeface. In (a), we see the "Times" spirit jump across the gap between the Latin and
Cyrillic alphabets. In (b), the "Optima" spirit transplants itself to Greek soil. In (c), a
Hebrew spirit leaps out of the mirror and jumps into Latin clothes. Finally, in (d), a
gigantic trans-Pacific (or trans-Asiatic) leap in which a Kana spirit (Japanese syllabic
characters) jumps into Latin letters.

computer models of perception in which a form of collective decision-making is carried
out, see the book by McClelland, Rumelhart, and Hinton, and my article on the Copycat
project.)

In recent years there has been a spate of reported sightings of unidentified font-like
objects (UFO's). Many people who claim to have seen UFO's insist that they come
from other planets. Some claim, for instance, to have seen Venusian written in the
Baskerville style, while others say they have seen Martian in the Helvetica style. There
are even claims of a complete Magnificat-style Alphacentauribet! Often these claims are
contradictory. For instance, one witness will maintain that the bowl of the 'g' was cigar-
shaped, while another maintains equally vehemently that it resembled a saucer. Needless
to say, not a single such sighting has ever been scientifically validated.

In law, extant rules, statutes, and so on, are never enough to cover all possible 
cases (reminding us once again of the fact that no fixed and rigid set of 'A'-defining rules
can anticipate all 'A's). The legal system depends on the notion that people, whose
experience covers much more than the specific case and rules at hand, will bring to bear 
their full range of experience not only with many categories but also with the whole 
process of categorization and mapping. This allows them to transcend the specific, rigid, 
limited rules, and to operate according to more fluid, imprecise, yet more powerful 
principles. Or, to revert to the other vocabulary, this ability is what allows people to 
transcend the letter of the law and to apply its spirit. 

It is this tension between rules and principles, this tension between letter and spirit, that
is so admirably epitomized for us by the work of Donald Knuth and others
exploring the relationship between artistic design and mechanizability. We are entering a 
very exciting and important phase of our attempts to realize the full potential of 
computers, and Knuth's article points to many of the significant issues that must be 
thought through very carefully.

In summary, then, the mathematization of categories is an elegant goal, a
wonderful beckoning mirage before us, and the computer is the obvious medium to
exploit to try to realize this goal. Donald Knuth, whether he has been pulled by a distant
mirage or by an attainable middle-range goal, has contributed immensely, in his work on
METAFONT, to our ability to deal with letterforms flexibly, and has cast the whole
problem of letters and fonts in a much clearer perspective than ever before. Readers,
however, should not pull a false message out of his article: they should not confuse the
chimera of the mathematization of categories with the quest after a more modest but still
fascinating goal. In my opinion, one of the best things METAFONT could do is to inspire
readers to chase after what Knuth has rightly termed the "intelligence" of a letter, making
use of the explicit medium of the computer to yield new insights into the elusive "spirits"
that flit about so tantalizingly, hidden just behind those lovely shapes we call "letters".

Post Scriptum. 

Some months after this article appeared in Visible Language, the editor of that 
journal published a most interesting commentary by Geoffrey Sampson, now a professor 
in the Linguistics Department at the University of Leeds in England. Here are some 
extracts from his article, giving the gist of it: 

I believe that Douglas Hofstadter is unfair in his critique of Donald Knuth's 
"Meta-font" article .... Human life involves both open-ended categories and closed 
categories, and in many cases it is very hard to say whether a given intuitively 
familiar category is open-ended or closed .... Hofstadter writes as if Knuth assumes 
an obviously open-ended category to be closed; but I cannot see that Hofstadter has 
demonstrated this .... Baskerville and Helvetica are both book faces, rather than 
faces designed exclusively for display. On the other hand, the 56 'A's of 
Hofstadter's figure [Figure 12-3] are all drawn from display faces. It is much less 
obvious that the class of book faces is open-ended than that the class of display 
faces is .... 

If we restrict the task to book faces (which are the only faces discussed by Knuth) then the open-endedness of the range really does become questionable. Hofstadter denies that this restriction affects his point: with 'more conservative letters .... one simply has to look at a finer grain size, and all the same kinds of issues reappear'. Do they? ....

The only argument Hofstadter gives for this is the difficulty of 'parametrizing' the contrast between the round dots of Baskerville 'i', 'j' and the square dots in Helvetica, and between the tails of 'Q' in the two faces. But Hofstadter concedes that it is not 'inconceivable' that these problems could be solved. Furthermore it seems to me that the number of such points, where two faces differ with respect to some property of an individual letter in a way that appears not to be predictable on the basis of more general differences between the faces, is fairly limited. The tail of 'Q' is an oddity in many faces; likewise the terminal of 'G'; but on the other hand if you know what (say) 'P' looks like in a given book face you will have a very good idea what 'D' or 'H' or 'T' looks like.

I would suggest that it is an entirely reasonable research programme to 
attempt to define a finite (no doubt large) set of variables (many of which would no 
doubt be very subtle) which generate all roman book faces, including faces not 
explicitly taken into consideration when formulating the variables, and excluding 
pathological letterforms .... If Hofstadter's view of typography is correct, the task 
proposed will prove to be impossible: every extra face considered will force the 
addition of yet more independent variables to the meta-font. However, I believe we 
have no adequate reason to reach this negative conclusion a priori. 

When I first read this letter, I must admit, I felt that it made sense; that I had perhaps overstated my case. Sampson's point seemed reasonable. But then I started wondering, "Just where are the boundary lines of 'book-face-ness'?" This issue is beautifully exemplified by a tacit assumption made by Sampson. He calls Helvetica a book face, without any qualms. In doing so, he practically kicks the ball between his own goal posts, for Helvetica is almost always thought of as a display face, and is most often used in book titles and advertising displays. It is a sans-serif face, like Optima, Eras, and many others of a similar vintage. I wonder what Sampson feels about serifed faces such as Goudy, Italia, Souvenir, Korinna, etc. (See Figure 13-11.) Which of these would count as display faces, and which as book faces?

Treacherous waters, these. The "problem" (actually not a problem at all but a 
marvelous fact) is that the same typeface designers who design our favorite book faces 
also design our favorite display faces. And the same sense of style and joyous creation is 
called upon in both tasks. The way I 

FIGURE 13-11. Showing the futility of trying to draw a firm line between display faces and book faces. From top to bottom, we have: Eras Demi, Romic Light, Goudy Extra Bold, Italia Medium, Souvenir Light, and Korinna Extra Bold. It is easy to conceive of a book being printed in any of these faces (in a light weight), yet none is a standard book face.









think of it is that each designer has a "wildness knob" with which to fiddle. When it's set 
low, the complexities and trickeries "retreat" into the nooks and crannies of the 
letterforms: how strokes terminate, swerve, change width, meet, and so on, and so the 
resulting typeface appears reserved and dignified, conventional yet graceful and stylish, 
still full of the designer's known characteristics. When wildness is set high, the desire for 
unusual, exuberant effects is let out of the closet, and the resulting typeface is full of bold 
flair and exciting, risky bravado: strokes are doubled, omitted, have extravagant shapes, 
flourishes, and so on. It is quite naive to think that low wildness means "the same old 
book-face knobs are twiddled" no matter who's doing it, whereas high wildness involves 
an open-ended set of concepts. 

No creative designer with any pride would ever feel content creating within a pre- 
set formula, a predetermined set of knobs. The joy of any kind of creation is in playing at 
the boundaries of what has been done. Every perceptive observer has an intuitive sense of 
the implicosphere centered on each standard letter and each role within it-a sense of just 
how daring various deviations will seem and of just where they will begin veering off 
into unacceptability. At the blurry boundaries of an implicosphere is exactly where an 
artist most loves to play. With wildness set low, a designer will flirt with the boundaries 
largely from within, making most decisions on the conservative side. With wildness set 
high, many more risks will be taken, and the flirting will carry the designer noticeably 
further from the implicosphere's center, like a satellite in a wider orbit. Norm violation is the name of the game in creation, no matter where the "wildness" knob is set. High wildness or low, it's still the same designer and the same creative forces expressing themselves. It's just a question of how subtly, how subduedly, those influences will show.

* * * 

Hermann Zapf is the designer of the famous sans-serif face Optima, a typeface that some books have been printed in (see Figure 13-9a). Optima is deceptively simple-looking. People tend to think that given one letter, they could determine all the rest easily. Sampson says as much: "If you know what (say) 'P' looks like in a given typeface, you will have a very good idea what 'D' or 'H' or 'T' looks like." But if that's the case, then why did it take Zapf-one of the world's foremost type designers-seven years to design it? All I can say is that there is rampant naivete about the complexity of letters, even among people who visually are otherwise very astute.

A wonderful exercise to prove this to yourself is to try to draw the Helvetica Medium 'a' from memory (see Exhibit 'a', that is, Figure 13-12a). Study it for as long as you like, and then try to reproduce it. The better an eye you have, the more errors you will see you have made. Try it a few times. I myself must have attempted that 'a' several dozen times, and still I have never drawn it perfectly. This letter is one of my favorite letters of all,



FIGURE 13-12. Details of two classic typefaces: the 'a' of Helvetica Medium and the 'g' of Italia Book.

and I have probably spent more time admiring it than any other letter-yet for all that, I 
still have not fathomed it entirely. 

The case of Helvetica is interesting. What is characteristic about it? It was one of 
the first typefaces in which negative and positive spaces were given equal attention. It 
employed very simple, nearly mathematical curves. Why was it designed only in 1958? 
Why did it take so long for such obvious things to be done so elegantly? It's like asking 
why the ancient Greeks, with their love of purity and elegance, didn't discover group 
theory, the branch of mathematics dealing with abstract binary operations. Well, some 
ideas are so abstract that even though they are glimpsed through a fog centuries earlier, 
their full-scale arrival takes much longer. (Group theory waited patiently for 2,000 years 
after the Greeks to be discovered! Isn't group theory patient with our species?) Thus it 
was with the pristine qualities of Helvetica. And what seems remarkable, but is actually 
to be expected, is that in the same year as Max Miedinger designed Helvetica, Adrian 
Frutiger designed Univers, a lovely typeface, in many ways nearly indistinguishable from 
Helvetica. Some ideas are just ripe at certain times. 

The ideas in Helvetica were not visible to anyone in the 1930's, even though 
people had thousands of book faces and display faces to look at. Likewise, the ideas in 
Snorple (a classic book face to be designed by Argli Snorple in 2027) are not visible to us 
today, even if, in some sense, they are implicitly defined by what is all around us. 
Cultural pressures, such as the development of computers and low-resolution digital 
typefaces, have profound impacts on how letters are perceived. Here is a striking 
example. When Hermann Zapf heard about the curve called a "super-ellipse"-an elegant 
mathematical interpolation between a circle and a square (or, more generally, between an 
ellipse and a rectangle), devised in the 1950's by the Danish scientist and author Piet 
Hein-he decided to base a typeface on that shape. The result: Melior, a now-standard 
book face whose "bowls" are super-ellipses (see Figure 13-9c). The point is, type designers are as susceptible as anyone else is to the subtle ebb and flow of cultural waves, and evidence of those waves shows up in book faces no less than in display faces. You just have to look more closely. Book faces pose problems no less knotty than do display faces, Sampson notwithstanding.
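Piet Hein's superellipse, by the way, is the curve |x/a|^n + |y/b|^n = 1: with n = 2 it is an ordinary ellipse, and as n grows it swells toward the bounding rectangle (Hein favored n = 2.5). A minimal sketch of how one might trace such a curve, using the standard parametrization; the function name is my own, not anything from METAFONT or Melior:

```python
import math

def superellipse_point(t, a=1.0, b=1.0, n=2.5):
    """Point on |x/a|**n + |y/b|**n = 1 at angle parameter t.

    Standard parametrization: x = a*sign(cos t)*|cos t|**(2/n),
                              y = b*sign(sin t)*|sin t|**(2/n).
    """
    c, s = math.cos(t), math.sin(t)
    x = a * math.copysign(abs(c) ** (2.0 / n), c)
    y = b * math.copysign(abs(s) ** (2.0 / n), s)
    return x, y

# n = 2 gives an ellipse; large n approaches the rectangle's corners.
x, y = superellipse_point(math.pi / 4, a=2.0, b=1.0, n=2.5)
# every generated point satisfies the defining equation
assert abs(abs(x / 2.0) ** 2.5 + abs(y / 1.0) ** 2.5 - 1.0) < 1e-9
```

Sweeping t from 0 to 2*pi traces the full "bowl" shape that Melior's curves are modeled on.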

So on reconsideration, I stick with my point that all the same issues as apply to "wild" letterforms apply to "tame" ones-that one merely needs to look at a finer grain size to see the same kinds of problems. As I said above, modern book faces play with stroke tips in incredibly creative and surprising ways. Just look, for example, at Exhibit 'g'-that is, the 'g' of Italia (Figure 13-12b). Check out some of the other letters and then see what you think of Sampson's claim.

* * * 

People tend to think that only extreme versions of things pose deep problems. 
That's why few people see modeling the creativity of, say, the trite television character of 
Archie Bunker as a difficult task. It's strange and disorienting to realize that if we could 
write a program that could compose Muzak or write trashy novels, we would be 99 
percent of the way to mechanizing Mozart and Einstein. Even a program that could act 
like a mentally retarded person would be a huge advance. The commonest mental 
abilities-not the rarest ones-are still the central mystery. 

John McCarthy, one of the founders of the field of artificial intelligence, is fond 
of talking of the day when we'll have "kitchen robots" to do chores for us, such as fixing 
a lovely Shrimp Creole. Such a robot would, in his view, be exploitable like a slave 
because it would not be conscious in the slightest. To me, this is incomprehensible. 
Anything that could get along in the unpredictable kitchen world would be as worthy of 
being considered conscious as would a robot that could survive for a week in the Rockies. 
To me, both worlds are incredibly subtle and potentially surprise-filled. Yet I suspect that 
McCarthy thinks of a kitchen as Sampson thinks of book faces: as some sort of simple 
and "closed" world, in contrast to "open-ended" worlds, such as the Rockies. This is just 
another example, in my opinion, of vastly underestimating the complexity of a world we 
take for granted, and thus underestimating the complexity of the beings that could get 
along in such a world. 

Ultimately, the only way to be convinced of these kinds of things is to try to write 
a computer program to get along in a kitchen, or to generate book faces. That's when you 
finally come face to face with the extremely limiting notion of what a knob really is. 
People's notion of knobs has too much intuitive fluidity to it. It's hard to identify with a 
computer and to see things utterly and foolishly rigidly-but that's where you have to 
begin if you want to understand why knobbifying the alphabet is a task of vast 
magnitude, and is a microcosm of the task of knobbifying all of human thought. 

* * * 

It is very tempting to think that a few degrees of freedom, when combined, can cover any possible situation. After all, the number of possible states of a multi-knob machine is the product of the numbers of settings of each of its knobs, and multiplying a bunch of relatively small numbers together gets you rapidly into large-number territory. A perfect illustration of this line of thought is given in an ad I once clipped for a book called Director's and Officer's Complete Letter Book, informally nicknamed The Ghost. Here is some of what that ad says:

This is not a book on letter-writing technique: It is a collection of 133 business letters already written and ready to use. They cover virtually every business situation you will ever meet. Just change a few words. They are arranged by subject, with 988 alternate phrases and sentences, keyed so that you can adapt the right letter to your purpose with almost no effort .... Editor J. A. VanDuyn traveled for four years, collecting the finest examples of business letters written today. They're in crisp, direct, informal language, without cliches .... In 30 seconds you can look up the letter you need, by subject. You may need only to change the name, address, and half-a-dozen words. Or you may use one or more of the alternate phrases, sentences, or paragraphs on the facing page. In minutes, you've got your letter. With the personal touch you want. Perfectly suited to the sense you wish to convey ....

Some letters are especially hard. When you're stuck for the tactful approach, 
the just-right expression of concern, the graceful apology, you'll be thankful you 
have The Ghost. Look at some of these subjects:

Letters to Public Officials; Declining Appointive or Elective Positions; 
Letters of Condolence; Letters of Apology; Soliciting for Charitable 
Contributions; Adjustments-When the Answer is "No"; Letters to Creditors; 
Contacting Inactive Accounts; Collection Letters; Requests for References-11 
chapters in all. 

New subjects are thoroughly covered. You'll find letters on contracting for 
computer services, apologizing for computer errors, contracting for hardware and 
software. Virtually every letter a business executive could ever need is here in The 
Ghost-waiting for you. 

I wonder if it contains letters that apologize for the mechanically written tone of recent 
letters, or letters that apologize for the incorrectly selected letter sent last time-and so on. 
The idea that anyone could think that every possible situation has been anticipated just 
boggles the mind. How credulous does one have to be to buy this book? (By the way, if 
you're interested, it costs only $49.95, and you can order it from Prentice-Hall, Inc., 
Englewood Cliffs, New Jersey 07632. But act now-it won't last long.) 

* * * 

In talking about knobs and creativity once with some architects, I encountered some advocates of "shape grammars" used to design houses, gardens, tea rooms, and so on. I was shown how a certain class of Frank Lloyd Wright houses known as his "prairie houses" had been parametrized and embedded in a shape grammar. An article by H. Koning and J. Eizenberg presents the grammar and shows a large number of external and internal designs of pseudo-Wright houses. This kind of art by formula reminds me of the famous aleatoric waltz by Mozart, in which one-measure fragments can be assembled in any order to make an acceptable, if feeble, piece of music. Shape grammars recognize more levels of structure than Mozart did, but then he was doing it only as a joke. It seems to me, after perusing several articles on architectural shape grammars, that the designs they produce are respectable-in fact they are very similar to the input designs. But for that very reason, they strike me as rather dull and dry designs, given that they are all ex post facto. We are back at the issue of pseudo-Mondrian versus genuine Mondrian (see Figure 10-14 and the accompanying discussion), and the questionable artistic value in extracting features of a once-novel creation and using them to allow a machine to mimic or perhaps even improve upon that one creation, but always in a blatantly derivative way.
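The aleatoric-waltz recipe is trivially mechanizable, which is rather the point: any fixed table of freely interchangeable one-measure fragments, drawn at random, yields an "acceptable, if feeble" piece. A toy sketch of the idea, with invented placeholder fragments (not Mozart's actual measures):

```python
import random

# Hypothetical one-measure fragments; by design, any sequence "works".
FRAGMENTS = ["C-E-G", "G-B-D", "F-A-C", "A-C-E", "D-F-A", "E-G-B"]

def aleatoric_waltz(n_measures=16, seed=None):
    """Assemble a piece by drawing one fragment per measure at random."""
    rng = random.Random(seed)
    return [rng.choice(FRAGMENTS) for _ in range(n_measures)]

piece = aleatoric_waltz(16, seed=1)
# Acceptable by construction -- and blatantly derivative by construction, too.
```

The acceptability is guaranteed entirely by the table, not by the random choices; that is exactly the ex post facto quality at issue.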

Readers might be surprised to learn that one part of my research is not that distant 
from either shape grammars or METAFONT: the Han Zi project, whose goal is to make a 
program able to produce Chinese characters in a "twiddlable" style. All characters are 
reduced to smaller units, which in turn are reduced to smaller units, and so on, until the 
level of basic strokes is reached. Traditional Chinese calligraphers will tell you that there 
are seven or eight such basic strokes, but that is only for humans, whose vision and 
concepts are very fluid. For rigid machines, the number has to be increased. I have found 
that somewhere around 40 will suffice to make just about any character, although for 
most purposes 30 or 35 will do. The definition of each character is style-independent, 
which means that if you change the basic strokes, all characters will change in 
appearance. An example of this is shown in Figure 13-13, in which a short sentence is 
printed out by Han Zi in two different styles (and in which the program says two different 
things about its output). 
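The text does not spell out Han Zi's internal data structures, but the scheme it describes-style-independent character definitions bottoming out in a swappable table of basic strokes-might be sketched as follows. All names and the stroke inventory here are invented for illustration:

```python
# Style-independent definitions: each character is a list of named parts,
# which are either smaller characters or basic-stroke names. Only the
# stroke table knows anything about visual style.
CHARACTERS = {
    "wood":   ["heng", "shu", "pie", "na"],   # built directly from strokes
    "forest": ["wood", "wood"],               # built from smaller units
}

def render(name, stroke_table):
    """Recursively expand a character down to styled strokes."""
    if name in stroke_table:          # a basic stroke: look up its styling
        return [stroke_table[name]]
    out = []
    for part in CHARACTERS[name]:
        out.extend(render(part, stroke_table))
    return out

calligraphic = {"heng": "curved bar", "shu": "tapered pole",
                "pie": "flick left", "na": "flick right"}
robotic = {"heng": "straight bar", "shu": "straight pole",
           "pie": "diagonal left", "na": "diagonal right"}

# The same definition of "forest" changes appearance when strokes are swapped.
```

Swapping `calligraphic` for `robotic` restyles every character at once, since all definitions share the one stroke table-the "twiddlable style" of the project.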

My co-worker David Leake and I do not harbor any illusions as to the generality 
of this approach to style in Chinese. It is quite obviously subject to all the limitations of 
any parameter-based approach to style: rigidity and non-creativity. Still, we find it an 
exciting challenge to try to do the best we can within the obvious limitations of such a 
system. It helps us see just how far these systems can be pushed, it teaches us more about Chinese writing, and perhaps best of all, it entertains and intrigues the many Chinese students we know.

* * * 

The creative, non-rut-stuck mind is always coming up with ideas that jump out of 
preconceived categories. A lovely cover on Science News (January 8, 1983) shows four 
new ideas for airplanes. One is a fuselage-less flying wing with six engines and with 
vertical tails at both ends of the wing. Another is 







FIGURE 13-13. Self-descriptive Chinese sentences. The upper one, in a rather calligraphic hand, says: "These Chinese characters I've written are really not bad." The lower one, in a rather robot-like hand, says: "These Chinese characters I've written are really not good." Both were written by the Han Zi program, with only about twenty basic strokes changed. The basic strokes themselves are shown in the boxes. All 50,000 (or so) characters in the Chinese language can be built up by the Han Zi program from about 40 distinct basic strokes, so that one can switch the visual mood of any passage simply by switching 40 basic graphic objects. Still, we-David Leake and I-are nowhere near being able to capture, in a few simple stroke-redefinitions, the creative variety of Figure 12-4. Our program does not see what it produces, and perception of what one has produced is essential to good creativity.

a propeller-driven craft whose curvy propeller blades look more like flower petals than like fan blades. The third is a plane whose two wings bend up and over its fuselage, meeting each other to form a complete circle (thus there is really only one wing, strictly speaking). The fourth shows a kind of "Siamese twin" plane, with one giant wing being shared by two parallel fuselages. Marvelous images of "Future Flight", as the caption says. Try to put all possible future aircraft designs into a set of fixed knobs! Here is a case where roles are split and merged with the greatest of ease. Visions of the future often feature these kinds of exciting "twists" on present ideas, full of novelty and considerably beyond trivial knob-twisting-yet even they usually fall far short of anticipating how the future really turns out.

An entertaining use of knobs is in the new movie genre called "Choice-a-Rama". The slogan says, "Where you decide what happens next!" Presumably, the audience votes at predetermined choice points, and this selects one pathway out of a predetermined set of possible continuations. It is like making dynamic choices at every possible turn while driving through a city, and being surprised by where one winds up. But it must be very expensive to have more than a few choice points, because the numbers multiply. If there are ten binary choice points, that means 2^10, or 1,024, different pathways have to be stored somewhere on film. It's an amazing, if decadent, symbol of our society.
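The arithmetic here is the same knob-multiplication as before: the number of distinct outcomes is the product, over all choice points, of the number of options at each. A two-line check:

```python
from functools import reduce
from operator import mul

def state_count(knob_settings):
    """Total states of a multi-knob machine = product of each knob's settings."""
    return reduce(mul, knob_settings, 1)

assert state_count([2] * 10) == 1024       # ten binary choice points
assert state_count([10, 10, 10]) == 1000   # three ten-position knobs
```

The product grows exponentially in the number of knobs, which is exactly why a film with many choice points would be ruinously expensive to shoot.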



In conclusion, let me mention an inspiring use of knobs: in tactical nuclear weapons whose "yield" can be controlled. This is called, naturally enough, "dial-a-yield", in the same spirit as "dial-a-pizza" or "dial-a-prayer" services. Depending on your need,
you can decide just how much of the enemy forces you wish to take out. A high setting 
has the appealing advantage of making a bigger "kill" (although one shouldn't use crude 
words like that) but the annoying disadvantage that it may trigger a similar or bigger 
nuclear retaliation on the part of the enemy, thus triggering the rapid slide down a 
slippery slope toward an all-out holocaust. Bother! All other things being equal, that's 
undesirable, so one is encouraged to use lower settings unless one is particularly peeved 
or impatient. After all, who wants to bring about Armageddon unnecessarily or 
prematurely? By gosh, don't knobs have the darndest uses? 



Section IV: 
Structure and Strangeness 



Mathematical structures are among the most beautiful discoveries made by the human mind. The best of these discoveries have tremendous metaphorical and explanatory power, jumping across discipline boundaries, illuminating many areas of thought simultaneously. In addition, the best discoveries often reveal truly bizarre facets of familiar concepts. In the following seven chapters, four wonderful mathematical ideas are considered. The "Magic Cube" is an engaging object for many reasons, not the least of which is its seeming physical impossibility, as well as the frustrating way that order and chaos appear and disappear on its surface as it is twisted. The borderline between order and chaos in mathematics is the next topic treated, where we see the iteration of very simple functions giving rise to unexpectedly chaotic phenomena-in particular, "strange attractors". A strange attractor is a very peculiar shape having structure on an infinite number of scales at once. This property applies not only to strange attractors, but to a much larger class of shapes known as "fractals". They in turn are examples of the more general mathematical concept of recursion, one of our era's most fruitful areas of exploration in mathematics and computer science. Recursion and recursivity are presented in three chapters on the computer language Lisp, the language used most in artificial intelligence research. Finally, we move from computers to their microscopic substrate: the eerie netherworld of quantum phenomena, and the unresolved mysteries about the relationship between the macroworld and the microworld.

Magic Cubology 




March, 1981 

Cubitis magikia, n. A severe mental 
disorder accompanied by itching of the 
fingertips, which can be relieved only 
by prolonged contact with a 
multicolored cube originating in 
Hungary and Japan. Symptoms often 
last for months. Highly contagious. 

What this stuffy medical-dictionary entry fails to mention is that contact with 
the multicolored cube not only cures the itchiness but also causes it. Furthermore, it 
fails to point out that the affliction can be highly pleasurable. I ought to know; I have 
suffered from it for the past year and still exhibit the symptoms. 

Bűvös Kocka-the Magic Cube, also known as Rubik's Cube-has simultaneously taken the puzzle world, the mathematics world, and the computing world by storm. (See Figure 14-1.) Seldom has a puzzle so fired the imagination of so many people, perhaps not since Sam Loyd's famous "15" Puzzle, which caused mass insanity when it came out in the nineteenth century, and which is still one of the world's most popular puzzles. The 15 Puzzle and the Magic Cube are spiritual kin, the one being a two-dimensional problem of restoring the scrambled numbered pieces of a 4 X 4 square to their proper positions, and the other being a three-dimensional problem of restoring the scrambled colored pieces of a 3 X 3 X 3 cube to their proper positions. The solutions of both demand that the solver be willing to undo seemingly precious progress time and time again; there is no route to the goal that does not call for partial but temporary destruction of the visible order achieved up to a given point. If this is a difficult lesson to learn with the 15 Puzzle, how much harder with the Magic Cube! And both puzzles have the fiendish property that well-meaning bumblers or cunning rogues can take them apart and put them back together in innocent-looking positions from which the goal is




FIGURE 14-1. A Magic Cube in (a) its pristine state, also called START; (b) a typical scrambled state.

absolutely unattainable, thereby causing the would-be solver considerable grief.

This Magic Cube is much more than just a puzzle. It is an ingenious mechanical invention, a pastime, a learning tool, a source of metaphors, an inspiration. It now seems an inevitable object, but it took a long time to be discovered. Somehow, though, the time was ripe, because the idea germinated and developed nearly in parallel in Hungary and Japan and perhaps even elsewhere. A report surfaced recently of a French inspector general named Semah, who claims to remember encountering such a cube made out of wood in 1920 in Istanbul and then again in 1935 in Marseilles. Of course, without confirmation the claims seem dubious, but still titillating. In any event, Rubik's work was completed by 1975, and his Hungarian patent bears that date. Quite independently, Terutoshi Ishige, a self-taught engineer and the owner of a small ironworks near Tokyo, came up with much the same design within a year of Rubik and filed for a Japanese patent in 1976. Ishige also deserves credit for this wonderful insight.

Who is Rubik? Erno Rubik is a teacher of architecture and design at the 
School for Commercial Artists in Budapest. Seeking to sharpen his students' ability to 
visualize three-dimensional objects, he came up with the idea of a 3 X 3 X 3 cube any 
of whose six 3 X 3 faces could rotate about its center, yet in such a way that the cube 
as a whole would not fall apart. Each face would initially be colored uniformly, but 
repeated rotations of the various faces would scramble the colors horribly. Then his 
students had to figure out how to undo the scrambling. 

When I first heard the cube described over the telephone, it sounded like a 
physical impossibility. By all logic, it ought to fall apart into its constituent "cubies" 
(one of the many useful and amusing terms invented by "cubists" around the world). 
Take any corner cubie-what is it attached to? By imagining rotating each of the three 
faces to which it belongs, you can see that the corner cubie in question is detachable 
from each of its three 

Magic Cubology 


edge-cubie neighbors. So how in the world is it held in place? Some people postulate 
magnets, rubber bands, or elaborate systems of twisting wires in the interior of the 
cube, yet the design is remarkably simple and involves no such items. 

In fact, the Magic Cube can be disassembled in a few seconds (see Figure 14-2c), revealing an infernal structure so simple that one has to ponder how it can do what it does. To see what holds it together, first observe that there are three types of cubie: six center cubies, twelve edge cubies, and eight corner cubies. (See Figure 14-2a.) Each center cubie has only one "facelet"; edge cubies have two; corner cubies have three. Moreover, the six center cubies are really not cubical at all-they are just square facades covering the tips of axles that sprout out from a sixfold spindle in the cube's heart. The other cubies, however, are nearly complete little cubes, except that each one has a blunt little "foot" reaching toward the middle of the cube, and some curved nicks facing inward.
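That census can be cross-checked in a couple of lines: the three cubie types together account for exactly the 54 colored facelets that six 3 X 3 faces provide. A quick sanity check:

```python
# cubie type -> (how many there are, facelets each carries)
CUBIE_TYPES = {"center": (6, 1), "edge": (12, 2), "corner": (8, 3)}

total_cubies = sum(count for count, _ in CUBIE_TYPES.values())
total_facelets = sum(count * facelets for count, facelets in CUBIE_TYPES.values())

assert total_cubies == 26            # all visible cubies (no hidden 27th cube)
assert total_facelets == 6 * 3 * 3   # six faces of nine facelets each
```

Note that 26 is one short of 3 x 3 x 3 = 27: the would-be central cubie is replaced by the six-armed spindle.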

The basic trick is that cubies mutually hold one another in by means of their 
feet, without any cubie actually being attached to any other. Edge cubies hold corner 
cubies' feet, corner cubies hold edge cubies' feet. Center cubies are the keystones. As 
any layer, say the top one, rotates, it holds itself together horizontally, and is held in 
place vertically by its own center and by the equatorial layer below it. The equatorial 
layer has a sunken circular track (formed by the nicks in its cubies) that guides the 
motion of the upper layer's feet and helps to hold the upper layer together. Unless 
you're a mechanical genius, you really can't understand this without a picture, or, 
better yet, the real thing. 

In his definitive treatise, Notes on Rubik's "Magic Cube", David Singmaster, professor of Mathematical Sciences and Computing at the Polytechnic of the South Bank in London, defines the basic mechanical problem as that of figuring out how the cube is constructed. I sometimes wonder whether Rubik's intended visualization task for his students was to solve the unscrambling problem (Singmaster calls it the basic mathematical problem) or to solve the mechanical problem. I suspect the latter is the harder of the two. I myself must have put in more than 50 hours of work, distributed over several months, before I solved the unscrambling problem, and I never did solve the mechanical problem until I saw the cube disassembled. Singmaster informally estimates that people who eventually solve the unscrambling problem (without hints) take, on the average, two weeks of concentrated effort. Of course, it is hard for anyone who has done it to say exactly how long it took (how can you tell play from work?), but it's safe to say that if you are destined to solve the unscrambling problem at all, it will take you somewhere between five hours and a year. I trust this is reassuring.

An important fact that many people fail to appreciate at first is that to restore a 
scrambled cube even once to the START position (the state of Perfect Enlightenment 
and Grace, where each face is a solid color) is so hard that it is necessary to find a 
general algorithm for doing it from any scrambled state. No one can restore a messed-up Magic Cube to its pristine state by




FIGURE 14-2. In (a) the three types of cubie are identified: face centers (F), corners (C), and edges (E). In (b) the mechanism is revealed. You can see the six-pronged infernal spindle with all six face-center cubies attached to it, and one detached edge cubie and one detached corner cubie. Notice that no cubie is a complete cube. In fact, the face centers are just facades! In (c), the gradual dismantling and rebuilding of a Cube are shown. Warning: If you follow this procedure, you are advised to rebuild your Cube in its pristine state; otherwise, you will probably wind up with your Cube in an orbit from which START is inaccessible.



mere trial and' error'. Anyone who gets back to START has built up a small science. 
A word of warning: Proposed solutions to the mechanical problem are often 
lacking in clarity, having either too much or too little detail.- It is certainly a challenge 
to come up with a mechanism that has the multifaceted twistability of the Magic 
Cube, but it is perhaps no less of a challenge to describe the mechanism in language 
and diagrams that other people can readily comprehend. By the same token, to convey 
algorithms that restore the cube to START calls for a good, clear notation. Singmaster 
himself has an excellent notation that is now considered standard; I will present it 
below. A second word of warning: I am not a "cubemeister" (one who has contributed 
to the annals of the profound science of Cubology); I am a mere cubist, an amateur 
dazzled by the Cube and by the virtuosos who have mastered it. Therefore I am not a 
suitable recipient of novel solutions to the mechanical problem or to the unscrambling 
problem. I recommend that readers who believe they have some novel insight 
communicate it to Singmaster, who runs what amounts to the World Center for 
Cubology. His address is: Department of Mathematical Sciences and Computing, 
Polytechnic of the South Bank, London SE1 0AA, England. 

* * * 

By now, I would hope that your appetite has been whetted to the point where 
immediate possession of a Magic Cube is an urgent priority. Fortunately, this can be 
arranged quite easily. Most any toy store now carries them under such names as 
"Rubik's Cube", "Wonderful Puzzler", and miscellaneous others. The price ranges 
from a couple of dollars for a cheap model to roughly $15 for a very solid and 
high-quality cube. It is likely that many people will buy cubes, little suspecting the 
profound difficulty of the "basic mathematical problem". They will innocently turn 
four or five faces, and suddenly find themselves hopelessly lost. Then, perhaps 
frantically, they will begin turning face after face one way and then another, as it 
dawns on them that they have irretrievably lost something precious. When this first 
happened to me, it reminded me of how I felt as a small boy, when I accidentally let 
go of a toy balloon and helplessly watched it drift irretrievably into the sky. 

It is a fact that the cube can be randomized with just a few turns. Let that be a 
warning to the beginner. Many beginners try to claw their way back to START by 
first getting a single face done. Then, a bit stymied, they leave their partially solved 
cube lying around where a friend may spot it. The well-known "Don't touch it!" 
syndrome sets in when the friend innocently picks it up and says, "What's this?" The 
would-be solver, terrified that all their hard-won progress will be destroyed, shrieks, 
"Don't touch it!" Ironically, victory can come only through a more flexible attitude 
allowing precisely that destruction. 

For the beginner, there is an awesome sense of irreversibility about destroying 
START, a fear of tumbling off the edge of a precipice. When my own first cube (I 
now have dozens) was first messed up (by a guest), I felt both relieved (because it was 
inevitable) and sad (because I feared START was gone forever). The physicist in me 
was reminded of entropy. Once START had become irretrievable, each new twist of 
one face or another seemed irrelevant. To my naive eye there was no distinguishing 
one messed-up state from another, just as to the naive eye there is no distinguishing 
one plate of spaghetti from another, one pile of fall leaves from another, and so on. 
The details meant nothing to me, so they didn't register. As I performed my "random 
walk", the vastness of the space of possible shufflings of the little cubies became 
apparent. 

As with a deck of cards, one can calculate the exact number of possible 
rearrangements of the cube. An initial estimate would run this way. The first 
observation (a rather elementary one) is that on the rotation of any face, each corner 
goes to another corner, each edge to another edge, and the center of the face stays put 
(except for its invisible rotation). Therefore corners mix only with their own kind, and 
the same goes for edges. There are eight corner cubies and eight corner cubicles (the 
spatial niches, regardless of their content). Cubies and cubicles are to the cube as 
children and chairs are to the game of musical chairs. Each corner cubie can be 
maneuvered into any of the eight corner cubicles. This means that we have eight 
possible fillers for cubicle No. 1, seven for cubicle No. 2, six for cubicle No. 3, and so 
on. Therefore the corners can be placed in their cubicles in 8 × 7 × 6 × 5 × 4 × 3 × 2 
× 1 (= 8!) different ways. But each corner can be in any one of three orientations. 
Thus one would expect a further factor of 3^8 from the eight corners. One would 
expect the same for the twelve edge cubies: twelve objects can be permuted among 
themselves in 12! different ways, and then, since each of them has two possible 
orientations, that gives another factor of 2^12. The center cubies never leave their 
START positions (unless the cube is rotated as a whole) and have no visibly distinct 
orientations, so they do not contribute. If we multiply the numbers out, we get 
519,024,039,293,878,272,000 possible positions, about 5.2 × 10^20. 

But there is an assumption here: that any cubie can be gotten into any cubicle 
in any orientation, regardless of the other cubies' positions and orientations. As we 
will see, this is not quite the case. It turns out that there is a mild constraint on the 
orientation of the corner cubies: any seven can be oriented arbitrarily, but the last one 
is then forced, thus removing one factor of three. Similarly, there is a mild constraint 
on edge cubies: of the twelve, any eleven can be oriented arbitrarily, but the last one is 
then determined, so that another factor of two is removed. There is one final 
constraint on the permutations of cubies (disregarding their orientations) that says you 
can place all but two of them wherever you want, but the last two are forced. This 
removes a final factor of two, reducing the estimate above by a total factor of 3 × 2 × 
2 = 12, bringing the possibilities down to 

a mere 43,252,003,274,489,856,000, or about 4.3 × 10^19. Still, it must be said, this 
does slightly exceed the assertion on Ideal's label: "Over three billion combinations". 

Another way of thinking about this factor of twelve is that if you begin at 
START, you are limited to a twelfth of the "obvious" states, but if you disassemble 
your cube and reassemble it with a single corner cubie twisted by 120 degrees, you are 
now in a formerly inaccessible state, from which a whole family of 
43,252,003,274,489,856,000 new states is accessible. There are twelve such 
nonoverlapping families of states of the cube, usually called orbits by group theorists. 
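The counting above is easy to verify mechanically. Here is a short Python sketch (my own check, not part of the original text) that recomputes both the naive estimate and the constrained count:

```python
from math import factorial

# Naive estimate: place the 8 corner cubies (8!), orient each (3^8),
# place the 12 edge cubies (12!), and orient each (2^12).
naive = factorial(8) * 3**8 * factorial(12) * 2**12

# The three constraints: total corner twist must be integral (a factor
# of 3), total edge flippancy must be even (a factor of 2), and the
# permutation parities of corners and edges must agree (a factor of 2).
# Together they split the naive count into twelve orbits.
reachable = naive // (3 * 2 * 2)

print(naive)      # 519024039293878272000 (about 5.2 * 10**20)
print(reachable)  # 43252003274489856000  (about 4.3 * 10**19)
```

Dividing by 12 rather than multiplying each orbit out separately is just a restatement of the twelve-orbit picture: each orbit has the same size.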

* * * 

Speaking of impossible twists, I would like to mention a lovely discovery in 
Cubology that is parallel to ideas in particle physics. It was pointed out by 
mathematician Solomon W. Golomb. The discovery states: It is impossible to find a 
sequence of moves that leaves just one corner cubie twisted a third of a full turn and 
everything else the same. Now, recalling the famous hypothetical fundamental 
particle with a charge of +1/3 and its antiparticle with a charge of -1/3, Golomb calls 
a clockwise one-third twist a quark and a counterclockwise one-third twist an 
antiquark. Like their cubical namesakes, quark particles have proved to be 
tantalizingly elusive, and particle physicists generally believe now in quark 
confinement: the notion that it is impossible to have an isolated free quark (or 
antiquark). This correspondence between cubical quarks and particle quarks is a 
lovely one. 

Actually, the connection runs even deeper. Although quark particles cannot 
exist free, they can exist bound together in groups: a quark-antiquark pair is a meson 
(Figure 14-9e), and a quark trio with integral charge is a baryon. (An example is the 
proton, qqq, with a charge of +1.) Now in the Magic Cube, amazingly enough, it is 
possible to give any two corner cubies one-third twists, provided they are in opposite 
directions (one clockwise, the other counterclockwise). It is also possible to give any 
three corner cubies one-third twists, provided they are all in the same direction. Thus 
Golomb calls a state with two oppositely twisted corners a "meson", and one with 
three corners twisted in the same direction a "baryon". In the particle world, only 
quark combinations with an integral amount of charge can exist. In the cubical world, 
only quark combinations with an integral amount of twist are allowed. This is just 
another way of saying that the orientation of the eighth corner cubie is always forced 
by the first seven. In the cubical world, the underlying reason for "quark confinement" 
lies in the group theory. There may be a closely related group-theoretical explanation 
for the confinement of quark particles. That remains to be seen, but in any event, the 
parallel is provocative and pleasing. 
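Golomb's twist bookkeeping can be mimicked with exact fractions. The following sketch is my own illustration (the function names are invented for the example), showing that mesons and baryons have integral total twist while a lone quark does not:

```python
from fractions import Fraction

QUARK = Fraction(1, 3)       # clockwise one-third twist
ANTIQUARK = Fraction(-1, 3)  # counterclockwise one-third twist

def total_twist(twists):
    return sum(twists, Fraction(0))

def reachable_from_start(twists):
    # Only configurations whose total twist is a whole number can be
    # reached from START without disassembling the cube.
    return total_twist(twists).denominator == 1

meson = [QUARK, ANTIQUARK]      # two oppositely twisted corners
baryon = [QUARK, QUARK, QUARK]  # three corners twisted the same way

print(reachable_from_start(meson))    # True  (total twist 0)
print(reachable_from_start(baryon))   # True  (total twist 1)
print(reachable_from_start([QUARK]))  # False (quark confinement)
```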

* * * 

If we have a "pristine cube" (one in START), what kind of move sequence 
will create a meson or a baryon? Here we have an example of the most powerful idea 
in Cubology: the idea of "canned" move sequences that accomplish some specific 
reordering of a few cubies, leaving everything else untouched ("invariant", as group 
theorists say). There are many different terms for such canned move sequences. I have 
heard them called operators, transforms, words, tools, processes, maneuvers, routines, 
subroutines, and macros, the first three being group-theoretical terms and the last 
three being borrowed from computer science. Each term has its own flavor, and I find 
that I use them all at various times. 

In order to talk about processes, we need precision, and that means a good technical 
notation. I will therefore present Singmaster's notation now. First we need a way of 
referring to any particular face of the cube. One possibility is to use the names of 
colors as the names of the faces, even after the cubies have become mixed up. Now it 
might seem that calling a face "white" would be meaningless if white is scattered all 
over the place. But remember that the white center cubie never moves with respect to 
the five other center cubies, and thus defines the "home face" for white. So why not 
use color names for faces? Well, one problem is that different cubes come with their 
colors arranged differently. Even two cubes from one manufacturer may have 
different START positions. A more general convention is to refer to faces simply as 
left and right, front and back, and top and bottom. Unfortunately, the initials of "back" 
and "bottom" conflict. Singmaster resolves the conflict by replacing "top" and 
"bottom" by up and down. Now we have names for the six faces: L, R, F, B, U, D. 
Any particular cubie can be designated by lowercase italic letters naming the faces it 
belongs to. Thus ur (or ru) stands for the edge cubie on the right side of the top layer, 
and urf for the corner cubie in front of it (see Figure 14-3a). 

The most natural move for a right-handed cubist seems to be to grasp the right face 
with the thumb pointing up along the front face and to move the thumb forward. Seen 
from the right side, this maneuver causes a clockwise quarter-twist of the R face. This 
move will be designated R (see Figure 14-3b). 

The mirror-image move, where the left hand turns the L side counterclockwise (as 
seen from the left), is L^-1 or, for short, L'. A clockwise twist of the L side is called, 
naturally, L. A 90-degree clockwise turn of any face (from the point of view of an 
observer looking at the center of that face) is named by the letter for that face, and its 
inverse, the counterclockwise quarter-turn, has a prime mark following the face's 
initial. Quarter-turns will henceforth be called q-turns. 

With this nomenclature, we can now write down any move sequence, no matter how 
complex. A trivial example is four successive R's, which we write as R^4. In the 
language of group theory, this is the identity operation: it has zero effect. An equation 
expressing this fact is R^4 = I. Here, I stands for the "action" of doing nothing at all. 
Suppose we twist two different faces, say R first, then U. We will 

transcribe that as RU, not as UR. Note, in fact, that RU and UR are quite different in 
their effects. To check this out, first perform RU on a pristine cube, observe its 
effects, then undo it, try UR, and see how its effects differ. The inverse of RU is, quite 
obviously, U'R', not R'U'. (Incidentally, this strategy of experimenting with move 
sequences on a pristine cube is most helpful. Very early I found it useful to buy a 
second cube so that I could work on solving one while experimenting with the other, 
never letting the second one get far away from START.) 
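This transcript-reversal rule (read the moves backwards, inverting each q-turn) is mechanical enough to automate. A tiny Python helper of my own, using Singmaster's letters with moves separated by spaces:

```python
def invert(sequence):
    """Invert a move sequence in Singmaster notation: reverse the
    order of the turns and invert each one (R <-> R')."""
    inverted = []
    for move in reversed(sequence.split()):
        inverted.append(move[:-1] if move.endswith("'") else move + "'")
    return ' '.join(inverted)

print(invert("R U"))     # U' R'
print(invert("U' R'"))   # R U
print(invert("R U F'"))  # F U' R'
```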

* * * 

What is the effect of a particular "word"? That is to say, which cubies move 
where? To answer this question, we need a notation for the motions of individual 
cubies. The effect of R on edges is to carry the ur cubie around to the back face to 
occupy the br cubicle. At the same time, the br cubie swings around underneath, 
landing in the dr position, the dr cubie moves up like a car on a Ferris wheel to fill 
the fr cubicle, and the fr cubie comes to the top at ur. (See Figure 14-4a.) This is called 
a 4-cycle, and we'll write it in a more compact way: (ur,br,dr,fr). Of course, it does not 
matter where we start writing; we could equally well write (br,dr,fr,ur). 

On the other hand, the order of the letters in cubie names does matter. We can 
reverse all of them or none of them, but not just some of them. If you think of the 
letters as designating facelets, this will become clear. For example, if we wrote 
(ur,rb,dr,rf), it would represent a 4-cycle involving the same four cubicles as above, 
but one in which each cubie flipped before moving from one cubicle to the next. Of 
course, such a cycle cannot be accomplished by a single q-turn, but it may be the 
result of a sequence of q-turns of different faces (an operator). Or consider the 
following 8-cycle, shown in Figure 14-4c: (ur,uf,ul,ub,ru,fu,lu,bu). This has length 
eight, but involves only four cubicles. Each cubie, after making a full swing around 
the top face, comes back flipped (see Figure 14-4b). After two full swings, it is back 
as it started. Each facelet has made a "Möbius trip". We can designate this "flipped 4- 
cycle" as (ur,uf,ul,ub)+, where the plus sign designates the flipping. The designation 
(ru,fu,lu,bu)+ and numerous others would do as well. Thus the cycle notation tells you 
not only where a cubie moves but also its orientation with respect to the other cubies 
in its cycle. 

To complete our description of the effect of R, we must transcribe the 4-cycle 
of the corners. As with edges, we have the freedom to start at any corner we want, and 
once again we must be careful to keep track of the facelets so that we get the 
orientations right. Still, R has a rather trivial effect on corners: (urf,bru,drb,frd), 
which could also be written (rub,rbd,rdf,rfu), and many other ways. Summing up, we 
can write R = (ur,br,dr,fr)(urf,bru,drb,frd). This says that R consists of two disjoint 4- 
cycles. (If we wanted to, we could throw in a term standing for the 90-degree rotation 
of the R face's center, but since such rotation is invisible, we needn't do so.) 


FIGURE 14-4. The simple 4-cycle (ur,br,dr,fr), shown in (a), is what happens to edge 
cubies during the q-turn R. In (b), a trickier 4-cycle (ur,rb,dr,rf), involving the same 
four cubies, is shown; here, each cubie flips before entering the next cubicle. This 
cycle can be produced only through a sequence of q-turns. In (c), the 8-cycle 
(fr,ur,br,dr,rf,ru,rb,rd) is shown, which can also be thought of as a flipped 4-cycle, 
namely (fr,ur,br,dr)+. In (d), the 7-cycle (ur,br,dr,fr,uf,ul,ub) is shown snaking its 
way around the Cube, representing the effect on edges of the simple operator RU. 

What about transcribing a move sequence such as RU? Well, take a pristine 
cube and perform RU. Then start with some arbitrary cubie that has moved and 
describe its trajectory. For example, ur has moved to br. Therefore br has been 
displaced. Where has it gone? Find the new location of that cubie (it is dr) and 
continue chasing cubies 'round and 'round the cube until you find the one that moved 
into the original position of ur. You will find the following 7-cycle: 
(ur,br,dr,fr,uf,ul,ub) (see Figure 14-4d). 

What about corners? Well, suppose we trace the cubie that originated in the urf cubicle: 

where did RU carry it? The answer is: nowhere! It took a round trip but got twisted 
along the way. It changed into rfu. We can designate this clockwise twist, this 
"twisted unicycle", this quark, as (urf)+. This is shorthand for the following 3-cycle: 
(urf,rfu,fur). You can even see this as cycling the three letters u, r, and f inside the 
cubie's name. If the cycle had been an antiquark, we would have written (urf)-, and 
the letters would cycle the other way. 

What about the other seven corners? Two of them, dbl and dlf, stay put, and 
the other five almost form a 5-cycle: (ubr,bdr,dfr,luf,bul). It is unfortunate that the 
cycle does not quite close, because bul, although it gets carried into the original ubr 
cubicle, does so in a twisted manner. It gets carried to rub, which is a 
counterclockwise twist away from ubr. This means we are dealing with a 15-cycle. 
But it is so close to the 5-cycle above that we'll just tack on a minus sign to represent 
the counterclockwise twist. Our twisted 5-cycle is then (ubr,bdr,dfr,luf,bul)-, and the 
entire effect of RU, expressed in cycle notation, is (ur,br,dr,fr,uf,ul,ub) (urf)+ 
(ubr,bdr,dfr,luf,bul)-. 

Now that we have RU in cycle notation, we can perform rotations mentally, by sheer 
calculation. For instance, what would be the effect of (RU)^5? Edge cubie ur would be 
carried five steps forward along its cycle, which would bring it to ul. (This can also be 
seen as moving two steps backward.) Then ul would go to fr, and so on. The 7-cycle is 
replaced by a new 7-cycle: (ur,ul,fr,br,ub,uf,dr). Let us now look at the twisted 5- 
cycle. Corner cubie ubr would be carried five steps forward along its cycle, which 
brings it back to itself negatively twisted, namely, rub. Similarly, all the corner cubies 
in the 5-cycle would return to their starting points, but negatively twisted; thus, on 
being raised to the fifth power, a negatively twisted 5-cycle becomes five antiquarks. 
But if that is so, how is the requirement for integral twist satisfied? Don't we have 
one quark, (urf)+, and five antiquarks, and doesn't that add up to four antiquarks, with a 
total twist of -4/3? Well, I have slipped something by you here. Can you spot it? 
To gain facility with the cycle notation, you might try to find the cycle representation of 
various powers of RU and UR and their inverses. 
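The cubie-chasing described above can be done by machine. The sketch below is my own bookkeeping, and it tracks positions only, so twists and flips are invisible to it. It encodes the edge and corner 4-cycles of R from the text, and of U (which I infer from the same conventions), composes them, and extracts the cycles of RU and of (RU)^5:

```python
def perm_from_cycle(cycle, elements):
    """Build a position -> position map from a single cycle."""
    p = {e: e for e in elements}
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        p[a] = b
    return p

def compose(first, second):
    """The permutation 'do `first`, then `second`' (like R, then U)."""
    return {e: second[first[e]] for e in first}

def cycles_of(p):
    """Disjoint cycles of p, fixed points omitted."""
    seen, out = set(), []
    for start in p:
        if start in seen or p[start] == start:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = p[x]
        out.append(tuple(cyc))
    return out

edges = ['ur', 'br', 'dr', 'fr', 'uf', 'ul', 'ub',
         'df', 'dl', 'db', 'fl', 'bl']
corners = ['urf', 'ubr', 'drb', 'dfr', 'ufl', 'ulb', 'dlf', 'dbl']

R_e = perm_from_cycle(('ur', 'br', 'dr', 'fr'), edges)
U_e = perm_from_cycle(('uf', 'ul', 'ub', 'ur'), edges)
R_c = perm_from_cycle(('urf', 'ubr', 'drb', 'dfr'), corners)
U_c = perm_from_cycle(('urf', 'ufl', 'ulb', 'ubr'), corners)

# R^4 is the identity (R^4 = I): no cycles left.
p4 = R_e
for _ in range(3):
    p4 = compose(p4, R_e)
print(cycles_of(p4))  # []

RU_e = compose(R_e, U_e)
RU_c = compose(R_c, U_c)
print(cycles_of(RU_e))  # [('ur', 'br', 'dr', 'fr', 'uf', 'ul', 'ub')]
print(cycles_of(RU_c))  # [('ubr', 'drb', 'dfr', 'ufl', 'ulb')]
# (urf is a fixed point here: the model cannot see its quark twist.)

# Raise RU to the fifth power on edges:
p5 = RU_e
for _ in range(4):
    p5 = compose(p5, RU_e)
print(cycles_of(p5))  # [('ur', 'ul', 'fr', 'br', 'ub', 'uf', 'dr')]
```

The printed cycles match the 7-cycle, the 5-cycle, and the new 7-cycle derived by hand above.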

* * * 

Any sequence of moves can be represented in terms of disjoint cycles of various 
lengths (cycles with no common elements). If you are willing to let cycles share 
members, however, any cycle can be further broken up into 2-cycles (called 
transpositions, or sometimes swaps). For instance, consider three animals: an 
Alligator, a Bobcat, and a Camel. They initially occupy three ecological niches: A, B, 
and C (see Figure 14-5). The effect of the 

FIGURE 14-5. A zoological 3-cycle involving three objects: a, b, and c (an alligator, 
a bobcat, and a camel). Initially, each is in its usual ecological niche: a in A, b in B, 
and c in C. But then, after a permutation, c is in A, a is in B, and b is in C. This 3- 
cycle can be thought of as the result of two successive swaps. 

3-cycle (A,B,C) is to put them in the order Camel, Alligator, Bobcat. The same 
effect can be achieved, however, by first performing the swap (A,B) (what was in A 
goes to B and vice versa) and then performing (A,C). Of course, this can also be 
achieved by the two successive swaps (A,C) (B,C) or, for that matter, by (B,C) (A,B). 
On the other hand, no sequence of three swaps will achieve the same effect as 
(A,B,C). Try it yourself and see. (Note that a niche is like a cubicle and an animal is 
like a cubie.) 

An elementary theorem of zoop theory (a field we won't go into here) states 
that no matter how a given permutation of animals among niches is reduced to a 
product of successive swaps (which can always be done), the parity of the number of 
such swaps is invariant; that is to say, a permutation cannot be expressed as an even 
number of swaps one time and an odd number another time. Moreover, the parity of 
any permutation is the sum of the parities of any permutations into which it is broken 
up (using the rules for addition of even and odd numbers: odd plus even is odd, and so 
on). 
Now, this theorem has repercussions for the Magic Cube. In particular, you 
can see that any q-turn consists of two disjoint 4-cycles (one on edges and one on 
corners). What is the parity of a 4-cycle? It is odd, as you can work out for yourself. 
Thus, after one q-turn, both the edges and the corners have been permuted oddly; 
after two q-turns, evenly; after three q-turns, oddly; and so forth. The edges and 
corners stay in phase, in the sense that the parities of their permutations are identical. 
Now clearly, the null permutation is even (it effects zero swaps). So if we have a null 
permutation on corners, the permutation on edges must also be even. Conversely, a 
null permutation on edges implies an even permutation on corners. Imagine a state 
identical to START except for two interchanged edges (that is, one swap). Such a 
state would be even in corners but odd in edges, hence impossible. The best we could 
do would be to have two pairs of interchanged edges. The same argument holds for 
corners. In short, we have proven that single swaps are impossible; swaps must 
always come in pairs. (This is the origin of one of those factors of two in the earlier 
calculation of the number of reachable states of the cube.) There are processes for 
exchanging two pairs of edges, two pairs of corners, and even for exchanging one pair 
of edges along with one pair of corners. (This last process necessarily involves an odd 
number of q-turns.) 
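The parity bookkeeping lends itself to direct computation. A sketch of my own, counting swaps through cycle lengths (a k-cycle reduces to k - 1 transpositions):

```python
def parity(perm):
    """Parity of a permutation given as a position -> position dict:
    0 if it is an even number of swaps, 1 if odd."""
    seen, swaps = set(), 0
    for start in perm:
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            length += 1
            x = perm[x]
        swaps += length - 1  # a k-cycle is (k - 1) transpositions
    return swaps % 2

def from_cycle(cycle, elements):
    p = {e: e for e in elements}
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        p[a] = b
    return p

# The zoological 3-cycle: even (two swaps, never three).
print(parity(from_cycle(('A', 'B', 'C'), ['A', 'B', 'C'])))  # 0

# A q-turn is a 4-cycle on edges and a 4-cycle on corners: odd on both,
# so the two parities always stay in phase. A lone edge swap (odd on
# edges, even on corners) is therefore unreachable.
edges = ['ur', 'br', 'dr', 'fr', 'uf', 'ul', 'ub',
         'df', 'dl', 'db', 'fl', 'bl']
corners = ['urf', 'ubr', 'drb', 'dfr', 'ufl', 'ulb', 'dlf', 'dbl']
print(parity(from_cycle(('ur', 'br', 'dr', 'fr'), edges)))        # 1
print(parity(from_cycle(('urf', 'ubr', 'drb', 'dfr'), corners)))  # 1
```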

To round out the subject of constraints, let us ponder the origin of the 
constraints on corner-twisting and edge-flipping. Here is a clever explanation 
provided by John Conway, Elwyn Berlekamp, and Richard Guy, elaborating an idea 
due to Anne Scott. The basic concept is that we want to show that the number of 
flipped cubies is always even, and that the twist is always integral. But in order to 
determine what is flipped and what is twisted, we need a frame of reference. To 
supply it, we will define two notions: the chief facelet of a cubicle and the chief color 
of a cubie. (Remember that a cubicle is a niche and a cubie is a solid object.) The 
chief facelet of a cubicle will be the one on the up or down surface of the cube, if that 

FIGURE 14-6. Diagrams to aid in the proof that flippancy is even and twist is integral. 

In (a), the Cube is in START. The chief facelets of cubicles are shown by 
crosses and the chief colors of cubies by circles. (Note: The concept of "chiefness" 
does not apply to face-center cubicles or cubies.) Think of the crosses as floating in 
space and the circles as being attached to the Cube, so that when turns are made, the 
crosses stay where they are but the circles move. The bottom face looks identical to 
the top, the left face identical to the right, and the back face identical to the front. 

In (b), the results of the q-turn F are shown. The two empty circles indicate 
that the two cubies they are attached to have lost their "sanity". For them to regain 
their sanity, one cubie would have to be twisted one-third clockwise while the other 
cubie was twisted one-third counterclockwise, thus canceling each other's 
contribution to the total twist of the Cube. Similar remarks apply to the invisible left- 
hand face. 

In (c), the results of the q-turn R (as applied to START) are shown. Empty 
circles again come in pairs. The top and bottom corner cubies on the front face (each 
with an empty circle) have canceling twists, as in (b). The top and bottom edge cubies 
on the right face have canceling flippancies, and the one seemingly unmatched empty 
circle (on the edge cubie on the front face) is paired with an empty circle on the 
invisible back face. 

is one; otherwise it will be the one on the left or right wall (see Figure 14-6). There 
are nine chief facelets on U, nine on D, and four on the equator. (We can ignore the 
centers, because they never can be flipped or twisted.) The chief color of a cubie is 
defined as the color that should be on the cubie's chief facelet when the cubie "comes 
home" to its proper cubicle in the START position. 

Now the argument goes this way. Suppose the cube is scrambled. Any cubie 
that has its chief color in the chief facelet of its current cubicle will be called sane; 
otherwise it will be called flipped (this applies to edge cubies) or twisted (this, to 
corner cubies). Obviously, there are two ways a cubie can be twisted: clockwise (+1/3 
twist) and counterclockwise (-1/3 twist). The flippancy of a cube state will be defined 
as the number of flipped edge cubies in it, and the twist as the sum of the twists of the 
eight corner cubies. We shall say that the flippancy and twist of START are both zero, 
by convention. 

Next consider the twelve possible q-turns out of which everything else is 
compounded. Performing U or D (or their inverses) preserves both the 

flippancy and the twist, since nothing leaves or enters the up or down face. 
Performing F or B (or their inverses) leaves the total twist constant, by changing the 
twist of four corners at once: two by +1/3 and two by -1/3. It also leaves the 
flippancy alone (see Figure 14-6b). Performing L or R will likewise leave the total 
twist constant (four corner twists again cancel in pairs) and will change the flippancy 
by 4, since always four cubies will change in flippancy (see Figure 14-6c). The 
conclusion is what I stated above without proof: the eight corner cubies are always 
oriented to make the total twist a whole number, and the twelve edge cubies must 
always be oriented to make the total flippancy even. 

* * * 

After this discussion of constraints, you should be convinced that no matter 
how you twist and turn your Magic Cube, you cannot reach more than a twelfth of the 
conceivable "universe", beginning at START. It is another matter, though, to show 
that every state within that one-twelfth universe is accessible from START (or, what 
amounts to the same thing, only backward: that START is accessible from every state 
in the one-twelfth universe). For this, we need to show how to achieve all even 
permutations of cubies, and how to achieve all orientations that do not violate the two 
constraints described above. What it comes down to is that we have to show there are 
operators that will perform seven classes of operations: 

(1) an arbitrary double edge-pair swap, 

(2) an arbitrary double corner-pair swap, 

(3) an arbitrary two-edge flip, 

(4) an arbitrary meson, 

(5) an arbitrary 3-cycle of edges, 

(6) an arbitrary 3-cycle of corners, and 

(7) an arbitrary baryon. 

Of course, each of these operators should work without causing side effects 
on any other parts of the cube. With these powerful tools in our kit, we would be able 
to cover the one-twelfth universe without any trouble. In the case of the overlapping 
swaps of animals, you saw how a 3-cycle is really two overlapping 2-cycles. This 
implies that classes 5 and 6 can be made out of the first four classes. Similarly, a 
baryon can be made from two overlapping mesons. So all we really need is the first 
four classes. 

To show that all the operators belonging to these four classes are available, 
we'll use another of the most crucial and lovely ideas of Cubology: that of conjugate 
elements. It turns out that all we need is one example in each class; given one 
example, we can construct all the other operators of its class from it. How does this 
work? The idea is very simple. 

Suppose we had found one operator in class 1 that swapped, say, uf with 

FIGURE 14-7. How to use conjugate moves to turn an unsolved problem into a solved 
one. The unsolved problem is to effect the double swap shown by the white arrows. 
The solved problem is the double swap shown by the black arrows (on top). As long 
as we can maneuver the black cubies into the white cubicles, we are home free. This 
principle has nothing to do with the specific cubicles involved in the known and 
unknown operators, but simply with the idea that sometimes you can translate an 
unsolved situation into a solvable situation, use a known operator to handle that 
situation, then "back-translate" to regain the original situation, but with the tricky 
part now solved. This is the principle of conjugates. 

ub, and ul with ur, leaving the rest of the cube undisturbed. Let us call this operator H. 
Now suppose we wanted to swap two totally different pairs of edge cubies, say fl with 
fd, and rb with rd (see Figure 14-7). We can daydream: "If only those cubies were in 
the four 'magical swapping spots' on the top surface..." Well, why not just put them 
up there? It would be fairly simple to get four cubies into four specific cubicles. The 
obvious objection is: "Yes, but that would have an awful side effect: it would totally 
mess up the rest of the cube." But there is a clever retort. Let the destructive maneuver 
that gets those four cubies into the magical swapping spots be called A. Suppose we 
were smart enough to transcribe the move sequence of A. Then right after performing 
A, we perform our double swap H. Now comes the clever part. Reading our transcript 
in reverse order and inverting each q-turn, we perform the exact inverse of A. This 
will not only un-maneuver the four cubies back into their old cubicles, but will also 
undo the side effects A created in the rest of the cube. Does that restore the cube 
intact? Not quite. Remarkably, since we sandwiched H between A and A', the four 
edge cubies go home permuted; that is, each one winds up in the home of its swapping 
partner! Other than that, the cube is restored, and so we have accomplished precisely 
the double swap we set out to accomplish. 

When you think this through, you see that it is flawless in conception. The 
inverse maneuver, A', does not "know" we have exchanged two pairs of edges. As far 
as it is concerned, it is merely putting everything back where it was before A was 
executed. Hence we have "snuck" our swaps in under A's nose, which is to say we 
have "fooled the cube". Symbolically, we have 

carried out the sequence of moves AHA', which is called a conjugate of H. 
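Since AHA' is pure permutation algebra, it can be demonstrated without a cube. In this sketch (an invented miniature of my own, not the actual maneuver of Figure 14-7), H is the known double swap on the top edges, and A is a hypothetical setup permutation that ferries four other cubies into the magical swapping spots; conjugating transfers the swaps to them:

```python
def compose(first, second):
    """Apply `first`, then `second`."""
    return {x: second[first[x]] for x in first}

def inverse(p):
    return {v: k for k, v in p.items()}

# A miniature universe of eight edge cubicles.
spots = ['uf', 'ub', 'ul', 'ur', 'fl', 'fd', 'rb', 'rd']
identity = {e: e for e in spots}

# H: the known operator, swapping uf <-> ub and ul <-> ur on top.
H = {**identity, 'uf': 'ub', 'ub': 'uf', 'ul': 'ur', 'ur': 'ul'}

# A: a hypothetical setup maneuver carrying the four target cubies
# into the magical swapping spots (and the displaced ones out of the way).
A = {**identity, 'fl': 'uf', 'fd': 'ub', 'rb': 'ul', 'rd': 'ur',
     'uf': 'fl', 'ub': 'fd', 'ul': 'rb', 'ur': 'rd'}

# The conjugate A H A': do A, then H, then undo A.
conjugate = compose(compose(A, H), inverse(A))

moved = {x: y for x, y in conjugate.items() if x != y}
print(moved)  # {'fl': 'fd', 'fd': 'fl', 'rb': 'rd', 'rd': 'rb'}
# The swaps have landed on the new pairs; the top edges are back home.
```

Note that A' "knows" nothing about the swaps: it simply undoes A, and the exchange sandwiched in between survives.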

It is this kind of marvelously concrete illustration of an abstract notion of 
group theory that makes the Magic Cube one of the most amazing things ever 
invented for teaching mathematical ideas. Normally, the examples of conjugate 
elements given in group-theory courses are either too trivial or too abstract to be 
enlightening or exciting. The Magic Cube, though, provides a vivid illustration of 
conjugate elements and of many other important concepts of group theory. 

* * * 

Suppose you wanted to get a quark-antiquark pair on opposite corners, but 
knew how to do so only on adjacent corners. How could you do it? Here is a hint: 
There are two nice solutions, but the shorter and prettier one involves using a 
conjugate. Incidentally, any maneuver that creates a quark on one corner (with other 
side effects, of course) might be called a quarkscrew. 

What we have shown for edges goes also for corners: the ability to swap two 
specific corners enables you to swap any two corners. Conjugation allows you to 
build up an entire class of operators from any single member of that class. Of course, 
the question still remains: How do you find some sample operator in each of the four 
classes? For example, how do you find an operator that creates a meson on two 
adjacent corners (a combination of a quarkscrew and an antiquarkscrew)? How do 
you find an operator that exchanges two edge pairs both of which are on the top 
surface? I won't give the answer here, but will follow Singmaster, who points the way
by suggesting quasi-systematic exploration of some small "subuniverses" within the
totality of all cube states; that is, he suggests you look at subgroups. This means
restricting your set of moves deliberately to some special types of move. Here are a 
few examples of interesting subgroups created by various kinds of restriction: 

1. The Slice Group. In this subgroup, every turn of one face must be
accompanied by the parallel move on the opposing face. Thus R must be
accompanied by L', U by D', and F by B'. The name comes from the fact that
any such double move is equivalent to rotating one of the three central slices of
the cube. Singmaster abbreviates the slice move RL' by Rs, UD' by Us, and so
forth. Under this restriction, faces cannot get arbitrarily scrambled. Each face
will have a pattern in which all four corners share one color (Figure 14-8). A 
special case is the pattern called Dots, in which each face is all one color
except for its center (see Figure 14-9a). Can you figure out how to achieve Dots
from START? How many different ways are there of arranging the dots? How
does the Dots pattern resemble a meson? (You will find answers to all these 
questions, along with much else, in Singmaster's book.) 

FIGURE 14-8. The type of pattern that the Slice Group creates on all faces. 

2. The Slice-Squared Group. Here we restrict the Slice Group further,
allowing only squares of slice moves, such as Rs2 (which is the same as R2L2)
or Fs2 (which is the same as F2B2).

3. The Antislice Group. Here, instead of always rotating opposing faces
in parallel, we always rotate them in antiparallel, so that R is accompanied by L,
F by B, and U by D. An antislice move has a subscript a, as in Ra, which equals
RL. (Of course, the Antislice-Squared Group is no different from the Slice-
Squared Group.)

4. The Two-Faces Group. Allow yourself to rotate only two adjacent 
faces, say F and R. It turns out to be a pretty substantial challenge to figure out 
an algorithm for undoing an arbitrary scrambling of two faces, staying within 
the Two-Faces Group. Most cube experts will instead resort to the "elephant
gun" of twisting all six faces to get out of a mere two-face scramble. Shame on
them!

5. The Three-Faces Groups. The reason this category is pluralized is that 
there are nonequivalent choices of threesomes of faces. For example, you can 
form a kind of "bridge", as with faces L, U, and R, or you can form a "corner", 
as with faces F, U, and R. 

6. The Four-Faces and Five-Faces Groups. Again, there are various
non-equivalent choices of four faces. The Five-Faces Group is, as it
turns out, actually the full group of the cube. In other words, you can make an 
operator equivalent to R out of L, U, D, F, and B. 

7. The Two-Squares Group. As in the Two-Faces Group, you may
rotate only two faces, using only 180-degree turns at that. This is a very simple
group.

If you limit your attention to just the Two-Faces and Two-Squares groups, you
will be able to find processes that achieve double swaps, some of edges, others of
corners. It is a remarkable fact that these processes alone, together with the notion of
conjugation, will allow us, in a theoretical sense, to solve the entire unscrambling
problem.
Why don't we also need a meson maker and a double edge-flipper? Well,
consider how we might make a double edge-flipper from the two classes of tools one
may assume will be found, that is, double edge-swappers and double corner-swappers.
In order to flip two edges without creating any side effects, we'll perform two 
successive double edge-pair swaps, and both times they will involve the same pairs! 
For example, we might swap uf with ub, and df with db, and then reswap them. This
seems to be an absolute "nothing process", but that need not be the case. After all,
just as before, we can sandwich the second swap between a process X and its inverse 
X', where we carefully choose the process X so as to... (Oh, darn it all, I totally lost 
my train of thought there. I'm sure you can finish it up, though. I do remember that it 
wasn't too tricky, and that I thought the idea was rather elegant. I'm sure you will too.) 

The same kind of thinking will show how you can build up a meson maker out
of mere corner-swapping processes and conjugation. Given mesons, you can build up
baryons. And with mesons and baryons, double edge-flippers, double edge-pair
swappers, and double corner-pair swappers, you have a full kit of tools with which to
restore any scrambled cube to START, as long as it belongs to the same orbit as 
START. What I have given is, needless to say, a highly theoretical existence proof, 
and any practical set of routines would be organized quite differently. The type of 
solution I have described has the advantage of being compact in description, but it is 
enormously inefficient. In practice, a cube solver must develop a fairly large and
versatile set of routines that are short, easy to memorize, and highly redundant. There 
is an advantage to being able to carry out transformations in a variety of ways: you 
can choose whichever tool seems best adapted to the situation at hand, instead of, for 
instance, using some theoretically developed tool that takes several hundred q-turns to 
make a baryon. 

* * * 

The typical cube solver evolves a set of transforms partly by intuition, partly 
by luck, sometimes with the aid of diagrams, and occasionally with abstract principles 
of group theory. One principle nearly everyone formulates quite early is that of 
"getting things out of the way". This is once again the idea of conjugates, only in a 
simpler guise. The typical patter that goes along with it is something like this (I have 
included sound effects of -a sort): "Let's see, I'll swing this out of the way [flip, flip] 
so that I can move that [flap, flap], and now I can swing this back again [unflip, 
unflip]. There -now I've got that where I wanted it to be." You can hear the conjugate 
structure inside the patter ("flip, flap, unflip"). 

The only problem with being conscious of why it all works as you carry it out 
is that it may be too taxing. My impression is that most cubemeisters do not think in 
much detail about how their tools are achieving their goals, at least not while they are 
in the midst of restoring some scrambled cube. Rather, expert cube solvers are like
piano virtuosos who have memorized difficult pieces. As Dan Weise, an MIT
cubemeister, said to me, "I've forgotten how to solve the cube, but luckily, my fingers
remember."

The average operator seems to be about ten to twenty q-turns long. You don't
ever want to get lost in mid-operator, because if you do, you will have a totally
scrambled cube on your hands, even if you were carrying out your final transform. As
cubemeister Bernie Greenberg said to me once, "If I were solving a cube and
somebody yelled 'Fire!', I would finish my transform before clearing out."

My own style is probably overly blind. Not only do I not think about why my 
operators work as I am carrying them out; I have to admit that with some of them, I 
don't even have the foggiest idea why they work at all! I found these "magic 
operators" through a long and arduous trial-and-error procedure. I used some heuristic
notions, such as: "Explore various powers of simple sequences", "Use conjugates a
lot", and so on. One thing I hardly used at all (alas, poor Rubik) was three-dimensional
visualization. However, I do know one Stanford cubemeister, Jim McDonald, who 
can give the reason for every last q-turn he makes. His operators don't seem magical 
to him because he can see what they are doing at every moment along the way. In 
fact, he does not have them memorized as I do mine; he seems to reconstruct them as 
he unscrambles cubes, relying on his "cube sense". He is like an expert musician who 
can improvise where a novice must memorize. For interested readers, the central idea 
of Jim's method is first to solve the top layer except for one corner, and then to utilize 
the vertical "chimney" underneath that free corner as you might use a neighbor's 
driveway to turn your car around in. The other two layers are cleaned up by shunting 
cubies in and out of the "chimney/driveway". 

* * * 

Perhaps not coincidentally, the abstract approach has been carried to its
extreme by Singmaster's officemate, Morwen B. Thistlethwaite (I wonder what that
"B" stands for!). He currently holds the world record for the shortest unscrambling
algorithm. It requires at most 52 "turns". (A turn is defined as either a q-turn or a
half-turn, that is, a 180-degree turn of one face.) Thistlethwaite has used ideas of
group theory to guide a computer search for special kinds of transforms. His 
algorithm has the curious property of not giving any appearance of converging toward 
the solved state at all-until the very last few turns. 

This must be contrasted with the more conventional style. Most algorithms
begin by getting one layer, usually the top layer, entirely correct. (In saying "top layer"
rather than "top surface", I mean that the "fringe" has to be right, too: that is, the
cubies on top must be correct as seen from the side as well as from above.) This
represents the first in a series of "plateau states". Although further progress requires
any plateau state's destruction, that state will later be restored, and each time this
happens, more order will have been introduced. These are the successive plateau states.

After getting the top layer, the solver typically works on corners on the bottom 
layer, or perhaps on getting the horizontal equator slice all fixed up. Most algorithms 
can, in fact, be broken up into five or six natural stages, corresponding to natural 
classes of cubies that get returned to their home cubicles. My personal algorithm, for 
instance, goes through the following five stages: 

(1) top edges, 

(2) top corners, 

(3) bottom corners, 

(4) equator edges, and 

(5) bottom edges. 

In the first two of my stages, placement and orientation are achieved simultaneously. 
Each of the last three stages breaks up into substages: a placement phase and then an 
orientation phase. Naturally, the operators of any stage must respect all the 
accomplishments of preceding stages. This means that they may damage the order 
built up as long as they then repair it. They are welcome, however, to indiscriminately 
jumble up cubies scheduled to be dealt with in later stages. I find that other people's 
algorithms are usually based on the same classes of cubies, but the order of the stages 
can be completely different. 

Virtually all algorithms have the property that if you were to take a series of
snapshots of the cube at the plateau states, you would see whole groups of cubies
falling into place in patterns. This is called "monotonicity at the operator level", that
is, a steady, visible approach toward START, with no backtracking. Of course, you
would see something totally different if you took snapshots between plateau states, but
that is another matter. There is no known algorithm that makes visible progress with
every turn!

Very different in spirit is Thistlethwaite's algorithm. Instead of trying to put 
particular classes of cubies into their cubicles, he makes a "descent through nested 
subgroups". This means that, starting with total freedom of movement, he makes a 
few moves, then clamps down on the types of move that will thenceforward be 
allowed, makes a few more moves, clamps down a bit more, and so on, until the 
constraints become so heavy that nothing can move any more. But just at this point, 
the START position has been achieved! Each time, the clamping-down amounts to 
forbidding q- turns on two opposite faces, allowing only half-turns in their stead from 
then on. The first faces to be thus "clamped" are U and D, then come F and B, and 
finally L and R. The strange thing about this approach is that you cannot see START 
getting nearer, even if you take a series of snapshots at carefully chosen moments. 
Just all of a sudden, there it is ! It's as if you were climbing Everest and the peak were 
shrouded in clouds until the last 100 met .when suddenly the clouds break and there 
it- is! 

This Thistlethwaite algorithm suggests a thorny thought: Wouldn't it be nice
if there were an easy way to tell how far you are from START? We might call this a
"distance-from-START-ometer". Such a device would obviously be quite useful. For
example, it is rather embarrassing to resort to the full power of a general 
unscrambling algorithm to undo what some friend has done with four or five casual 
twists. For that reason alone, it would be nice to be able to assess quickly if some state 
is "really random" or is close to START. But what does "close" mean? Distances 
between two states in this vast space can be measured in two fairly natural ways. You 
can count either the number of q-turns or the number of turns needed to get from one 
state to the other (where "turn", as above, means either a q-turn or a half-turn). But
how can one figure out how many turns are needed to get to START without doing an 
exhaustive search? A reliable and at least fairly accurate estimate would be preferable, 
one that could be carried out quickly during a cursory inspection of the cube state. A 
naive suggestion is to count the number of cubies that are not in their home cubicle.
This estimator, however, can be totally fooled by the Dots position, in which nearly
all cubies are on the "wrong" side (see Figure 14-9a). That position is only eight
q-turns away from START. Perhaps the flippancy and the number of quarks could also
be taken into account by a better estimator, but I don't know of any. 

There are sophisticated group-theoretical arguments suggesting that the
farthest one can get from START is 22 or 23 turns. This is quite striking, considering
that most solvers' early algorithms take several hundred turns, and highly polished 
algorithms take a number somewhere in the 80's or 90's. Indeed, many mere operators 
take considerably more turns than Thistlethwaite's entire algorithm does. (My first 
double edge-flipper, for instance, was nearly 60 turns long.) 

One result that can be demonstrated easily is that there exist states at least 17 
turns away from START. The argument goes as follows. At the outset there are 18 
possible turns we might make: L, L', L2, R, R', R2, and so on. After that, there are 15 
reasonable turns to make. (One would not move the same face again.) The number of 
distinct turn sequences of length 2 is therefore 18 X 15, or 270. Another turn will
contribute another factor of 15, and so on. How long does it take before we have
reached the number of accessible states? It turns out that 17 is the smallest number of
turns that will theoretically allow access to 4.3 X 10^19 distinct states. Of course, not
every turn sequence of length 17 leads to a unique state, not by a long shot, and so we
haven't shown that 17 turns will reach every accessible state. We have simply shown
that at least 17 turns are needed if you want to reach every state from START. So,
conceivably, no two states are much more than 17 turns away from each other. But 
which 17 turns? That is the question. 
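The counting argument above can be checked mechanically. A minimal sketch of mine, using the standard figure of 43,252,003,274,489,856,000 (about 4.3 X 10^19) for the number of states in START's orbit:

```python
# Number of states in START's orbit (the standard figure, about 4.3 X 10^19).
STATES = 43_252_003_274_489_856_000

def sequences(n):
    """Turn sequences of length n: 18 choices for the first turn, then 15
    for each later turn (never the just-turned face again), as in the text."""
    return 18 * 15 ** (n - 1) if n > 0 else 1

n = 1
while sequences(n) < STATES:
    n += 1
print(n)  # -> 17: the shortest length whose sequences could possibly cover all states
```

Sequences of length 16 can reach at most about 7.9 X 10^18 states, short by a factor of five; length 17 is the first that clears the bar.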

So far, only God knows how to get from one state of the Magic Cube to 
another in the minimum number of turns. "God's algorithm" is, by definition, the 
speediest recipe for solving the Cube from any state. A burning question of Cubology 
is: Is God's algorithm just a gigantic table without any 

Magic Cubology 


FIGURE 14-9. A number of special configurations deserving of names. In (a), the
pattern known as Dots. In (b), Pons Asinorum. In (c), the Christman Cross. In (d), the
Plummer Cross. In (e), a Meson (showing what appears to be an isolated quark, but it
is actually balanced by an antiquark on the opposite corner). In (f), a Giant Meson,
consisting of a "giant quark" and a "giant antiquark" on opposite corners.

pattern in it, or is there a significant amount of pattern to it, so that an elegant and short
algorithm based on it could be mastered by a mere mortal? Notice that possession of a
distance-from-START-ometer would be tantamount to possession of God's algorithm. 
Given any scrambled state, you tentatively try out all eighteen possible twists and
then choose one that brings you closer to START. (Why must there always be one?)
Make it, and then repeat the process. It's a little arduous, but it gets you to START
directly, obviating plateaus or other intuitive intermediary states. That's one reason for
doubting that any simple such meter exists. 

* * * 

If God were to enter a cube-solving contest, It might encounter some rather stiff 
competition from a few prodigious mortals, even if they do not know Its algorithm. 
There is a young Englander from Nottingham named Nicholas Hammond who has got 
his average solving time down to close to 30 seconds! Such a phenomenal 
performance calls for several skills. The first is a deep understanding of the cube. The 
second is an extremely polished set of operators. The third is to have the operators 
down so cold that you could do them in your sleep. The fourth is sheer speed at 
executing twisty hand motions. The fifth is having a well-oiled "racing cube": one that 
turns at the merest twitch of a finger, eagerly anticipating every operator before it is 
needed. In short, the racing cube is a cube that wants to win. 

I have not yet heard of people naming their racing cubes, although that is sure
to come. It would seem, though, that there is a correlation between having a colorful
name and being a contributor to Cubology. Apart from Singmaster and Thistlethwaite,
there is Dame Kathleen Ollerenshaw (late Lord Mayor of Manchester), who has 
discovered many streamlined processes, has written an article on the Magic Cube, and 
has the distinction of being the first to report an attack of Cubist's Thumb, a grave 
form of the disease mentioned at the beginning of this column. Then there is Oliver 
Pretzel, the discoverer of a delicious twisted 3-cycle and the creator of a lovely "pretty 
pattern" called the "6-U" state, which can be reached from START by way of the long
word

L'R2F'L'B'UBLFRU'RLRsFsUsRs.

Pretty patterns are of interest to many cube lovers, but I cannot do them
justice here. I can mention only a few of the best I know. A good warm-up exercise is
to figure out how to make the state called Pons Asinorum ("Bridge of Asses"). It is
shown in Figure 14-9b. It has this name because, as one MIT cubemeister
remarked to me, "If you can't hack this one, forget about cubing." Then there are two
kinds of cross, known to the MIT cube-hacking community as the Christman Cross 
and the Plummer Cross (see parts (c) and (d) of Figure 14-9). The former involves 
three pairs of colors (U-D, F-R, and L-B), while the latter involves two triples in the
quark-antiquark style. My favorite pretty pattern is the "Worm", whose "genotype", or
turn sequence, is

RUF2D'RsFsD'F'R'F2RU2FR2F'R'U'F'U2FR.

Then there is the Snake, a similar sinuous pattern that winds around the cube:

BRsD'R2DR'B'R2UB2U'DR2D.

If you cut off the Snake's tail (R2D') and instead stick on B2RaU2RaB2D', you will
create a curious bi-ringed pattern. All of these are from pretty-pattern-meister Richard
Walker. A beautiful pattern is the Giant Meson (Figure 14-9f), made from a giant
quark (a 2 X 2 X 2 corner subcube rotated 120 degrees) and a giant antiquark. If you 
wish, you can top it off, using quarkscrews to twist a standard-size quark and 
antiquark onto the corners of the giant quark and antiquark, like cherries on top of 
sundaes. I'll let you figure out how to make this one. 

* * * 

I would like to leave you with a set of hints and some things to think about. A 
difficult challenge, good for cubists at all levels of cubistry, is for someone to do a 
handful of turns on a pristine cube, to return it to you in this mildly scrambled state, 
and for you to try to get it back to START by finding the exact inverse word. 
Cubemeisters will be able to invert a bigger handful of turns than novices. Kate Fried 
reportedly can invert seven turns regularly, and once, after a full day of staring at the 
cube, she undid ten. (I can undo about four.) 

My royal road to discovering an algorithm is based on two challenging 
exercises involving corner cubies only. The preliminary exercise is as follows. 
Maneuver the four corner cubies with white on them to the top face with their white
facelets pointing upward. Do not worry about which cubie is in which cubicle. 
Simultaneously do the same thing on the bottom face (of course with its color 
pointing downward). The advanced exercise is to do the preceding one while in 
addition making sure that all the corner cubies end up in their proper cubicles. This 
amounts to solving the 2 X 2 X 2 Magic Cube puzzle, and it will take you a long way 
toward mastery of the Magic Cube. 

To help you with your edge processes, here is a wonderful trick discovered by 
David Seal, based on a type of operator called a monoflip. I'll give it to you as a 
puzzle. How can you make a double edge-flipper out of a process that messes up the 
lower two layers but leaves the top layer invariant, except for flipping a single edge 
cubie? Hint: The answer involves the important group-theoretical idea of a
commutator, a word of the form PQP'Q'. I will also leave it to you to find your own
monoflip operator. After I found
out about it, I incorporated this trick into my method. 
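The reason such commutators pay off can be seen in miniature with permutations. In this toy sketch of mine (hypothetical "processes", not actual cube moves), P and Q each shuffle three pieces and overlap on just one; their commutator then disturbs only three pieces, though P and Q together touch five.

```python
def compose(p, q):          # apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def cycle(n, pts):          # the cyclic permutation pts[0] -> pts[1] -> ...
    p = list(range(n))
    for a, b in zip(pts, pts[1:] + pts[:1]):
        p[a] = b
    return tuple(p)

n = 10
P = cycle(n, [0, 1, 2])     # "flip": moves three pieces
Q = cycle(n, [2, 3, 4])     # "flap": moves three pieces, one shared with P
comm = compose(P, compose(Q, compose(inverse(P), inverse(Q))))
support = [i for i in range(n) if comm[i] != i]
print(support)  # -> [0, 2, 3]: only three pieces move
```

The smaller the overlap between P and Q, the smaller the commutator's footprint, which is exactly why a messy monoflip plus a top-layer turn can combine into a clean double edge-flipper.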

Here is a small riddle: Why do 5- and 7-cycles crop up so often in an object
whose symmetries all have to do with numbers such as 3, 4, 6, and 8? Where do cycle
lengths such as 5 and 7 come from? A somewhat related question is: What is the 
maximum order a word can have? (The order of a word is the power you have to raise 
it to in order to get the identity. For example, the order of R is 4.) You can show that 
the order of RU, for instance, is 105, by inspecting its cycle structure. 
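The order of a word follows from its cycle structure: a k-cycle of edges contributes period k (or 2k if the cycle comes back flipped), a k-cycle of corners contributes k or 3k, and the order of the whole word is the least common multiple of these contributions. A minimal sketch; the sample periods 15 and 7 below are simply one structure consistent with an order of 105, not a claim about RU's actual cycles.

```python
from math import gcd
from functools import reduce

def order_from_periods(periods):
    """Least common multiple of the periods contributed by each cycle."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods, 1)

# A twisted 5-cycle of corners (period 5 * 3 = 15) together with a plain
# 7-cycle of edges (period 7) gives exactly the order quoted for RU:
print(order_from_periods([15, 7]))  # -> 105
# For comparison, a word built of two plain 4-cycles, such as a single
# face turn, has order 4:
print(order_from_periods([4, 4]))   # -> 4
```

Note that 5 and 7 themselves never appear among the cube's symmetry numbers; they arise only as cycle lengths of composite words, which is the heart of the riddle.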

* * * 

Where do we go from here? I must mention that I have only scratched the 
surface of Cubology in this column. Rubik and others are working on generalizations
of various types. There already is a Magic Domino, which is like two-thirds of a
magic cube: two 3 X 3 layers (see Figure 14-10). You can rotate it by q-turns only
about one axis; you must do half-turns about the other two. In the START position,
one face is entirely black, the other entirely white, and both faces have the numbers 
from 1 through 9 in order. The Domino thus resembles the 15 Puzzle even more 
strongly than the cube does. 

Various people have made 2 X 2 X 2 cubes, and such cubes may go on sale
one day. You can make your own by gluing little three-cornered hats over each of the
eight corners of a 3 X 3 X 3 cube. Readers will naturally wonder about such enticing
possibilities as a 4 X 4 X 4 cube. Rest assured: it is being developed in the
Netherlands, and it may be ready soon. Inevitably, there is the question of both higher
and lower dimensionalities. Cube theorists are beginning to discuss the properties of 
higher-dimensional cubes. 

The potential of the 3 X 3 X 3 cube is not close to being exhausted. One rich 
area of unexplored terrain is that of alternate colorings. This idea was 

FIGURE 14-10. Erno Rubik's Magic Domino, scrambled.

FIGURE 14-11. Two alternate colorings for the Magic Cube, presenting totally novel 
solving problems for the cubist. In both colorings, center orientations do matter. 
However, in (a), edge orientations make no difference, and in (b), corner orientations 
make no difference. 

mentioned to me by various MIT Cube hackers. You can color the cubies in a variety 
of ways (see Figure 14-11). Each new coloring presents a different kind of 
unscrambling problem. In one variant coloring, edge-cubie orientations take on a vital 
importance. In another variant, corner-cubie orientations are irrelevant and centers 
matter. Then, moving toward simplicity, you can color two faces the same color, 
thereby reducing the number of distinct colors by one. Or you can paint the faces with 
just three colors. An extreme would be to have three blue faces meet at one corner and 
three white ones meet at the corner diagonally opposite. Inspector General Semah
says that on the cubes he saw, five faces had one color and the sixth face had another.

Who knows where it will all end? As Bernie Greenberg has pointed out: 

Cubism requires the would-be cubist to literally invent a science. Each solver 
must suggest areas of research to himself or herself, design experiments, find 
principles, build theories, reject them, and so forth. It is the only puzzle that requires its 
solver to build a whole science. 

Could Rubik and Ishige have dreamed that their invention would lead to a 
model and a metaphor for all that is profound and beautiful in science? It is an 
amazing thing, this Magic Cube. 


On Crossing the Rubicon 

July, 1982 

O, cursed spite,
that ever I was born to set it right! 

(Hamlet, Act I, Scene 5) 

THESE days, just "The Cube" will suffice; no one needs to say "Rubik's Cube" to
be understood as making a reference to that great puzzle object. In fact, I have a Cube
in the shape of a sphere, which I sometimes refer to as "the round Cube", but equally 
often merely as "that Cube over there". It has been sliced up in the proper way, with 
rotating "sides" and an inner mechanism that is the same as Rubik's design. And, what
is even more marvellous, I have what poses as a Cube but is most definitely not a
Cube: a cubical object sliced in a strange diagonal way, which scrambles in a
devilishly skew manner. Both these puzzles are illustrated in Figure 15-1. The sphere
is, of course, a Cube, while the cube is an impostor in Cube's clothing. (Note: In this
chapter, I use the word "cube" with lowercase "c" as a generic term for any
scrambling-by-rotation puzzle, and with capital "C" to mean the original item: the
3 X 3 X 3 Rubik's Cube.)

This proliferation of varieties of cube is really an astonishing phenomenon. 
Erno Rubik and his somewhat eclipsed Japanese counterpart Terutoshi Ishige began 
it, but then it just took off like a prairie fire. Suddenly there were variations on the 
Cube turning up all over-little ones, teeny-weeny ones, prettily decorated ones, and so 
forth. But in some sense none of these was an essentially different puzzle from the 
Cube itself. All of them simply dressed the same internal mechanism in different garb. 
The first essentially different cubes I saw came from Japan. They were 2 X 2 X 2's! 
One was magnetic, with eight metal cubies sliding around a central magnetic sphere. 
The other was plastic, and had an intricate mechanism similar to, but not identical to, 
the Rubik-Ishige 3 X 3 X 3 mechanism. It could not be identical, since the keystones
of the 3 X 3 X 3 mechanism are the six face centers, and in a 2 X 2 X 2, there aren't any
centers! Later I found out that this mechanism is also due to Rubik, and is based on
the 3 X 3 X 3 mechanism. This 2 X 2 X 2, shown in Figure 15-2a, is such a
wonderful, inevitable object, in some ways even more beguiling than the 3 X 3 X 3.
So what puzzles me is: Why aren't they available all over? The 2 X 2 X 2 ("Twobik's
Cube"?) seems to me an ideal stepping-stone from total novicehood to an intermediate
level of cubistry, as it involves solving only the corners of a 3 X 3 X 3.

Actually, the 2 X 2 X 2 was not quite the first essentially different cube I
encountered. I had seen a Magic Domino (another Rubik invention; see Figure 14-10)
much earlier. The Domino is like two of the three layers of a 3 X 3 X 3 cube. Its
square top and bottom layers both can turn 90 degrees, but its four rectangular sides
must turn 180 degrees to allow further moves. Another early variant was the
Octagonal Cube, a cube four of whose edges had been shaved and which, when 
twisted, produced some rather grotesque shapes. (See parts (b) and (c) of Figure 15- 
2.) Since in this version some of the information about edge parities is lost (you can't 
tell whether the "shaved" edges are forwards or backwards in their cubicles), it has 
some quirks that make solving it slightly different from solving the full Cube. On the
full Cube, flipped edges always come in pairs. Here, the same is true, except that since
you can't see whether a shaved edge is flipped or not, sometimes you'll wind up with
what appears to be a solved cube with but a single flipped edge. The first time, it can
be quite confusing, if you are used to the full Cube!

The next variation I encountered was one due to a young German named 
Kersten Meier, then a graduate student in operations research at Stanford. 

He had built a rough working prototype of a Magic Pyramid. It was so rough, in fact, 
that it often fell apart as you twisted its sides. Nonetheless, it was clearly an 
innovative step, and deserved to be marketed. Later I found out that at nearly the same
time, Ben Halpern, a mathematician at Indiana University, had come up with exactly
the same concept. Both had generalized the Rubik-Ishige 3 X 3 X 3 Cube mechanism
and had seen how to make a dodecahedral puzzle on the same principles. Halpern built
working prototypes of both the pyramid and the dodecahedron. The Meier-Halpern
variations are shown in Figure 15-2, parts (d) and (e). 

* * * 

As it turns out, Uwe Meffert, another German-born inventor, beat both Meier
and Halpern to the pyramidal punch, but in a different way. Back in 1972, Meffert had
been interested in pyramids and their pleasing qualities when held in the hand. 
Somehow, he devised the notion of a pyramid with twisting sides and invented the 
concept shown in Figure 15-3. He made a few and found them soothing to play with 
and helpful for meditation, but after a while he stored them away and more or less 
forgot about them. Then along came Rubik's Cube. Seeing its phenomenal success, 
Meffert realized that his old invention might have quite some potential value. So he 
quickly patented his design, made arrangements to have his device manufactured in 
quantity, and contacted a toy company for the marketing. The end result was the 
world success of the Pyraminx, a "pyramidal cube" (in my generic sense of "cube")
that operates completely differently from the Meier-Halpern pyramid.

Meffert, who now lives in Hong Kong, became deeply involved in the 
production and marketing end of his Pyraminx, and began traveling a lot. Through 
this he came in contact with other inventors in various parts of the world, and decided 
it would be a good idea to market the most interesting toys of the cube family
worldwide. Among these inventors were Meier and Halpern, and as a result, their 
pyramids too will soon be available to puzzle lovers the world over. They will be 
known as the Pyraminx Magic Tetrahedron. (I would have preferred "King Tet".) The
dodecahedron will also be available, under the name Pyraminx Magic Dodecahedron.
(For a catalogue showing Meffert's complete range, write to Uwe Meffert Novelties, 
Pricewell (Far East), Ltd., P.O. Box 31008, Causeway Bay, Hong Kong. Incidentally, 
Meffert welcomes ideas for new "cubic" puzzles. He also wants to develop a Puzzlers' 
Club, in which members would subscribe at a yearly flat rate and receive in return six 
or more new puzzles a year. These would be limited editions of particularly complex 
or esoteric forms of cubic puzzles. He would like to hear from prospective members.) 

Dr. Ronald Turner-Smith, a friend of Meffert's in the Mathematics Department 
at the Chinese University of Hong Kong, has written a charming little book on the 
patterns and the mathematics of the Pyraminx, 

On Crossing the Rubicon 






FIGURE 15-2. A number of variations on the theme of the Magic Cube. In (a), a 2x2x2 
cube. The "Octagonal Prism" (an octagonally shaved 3x3x3 cube), shown in its pristine state 
in (b) and scrambled in (c). In (d), the Pyraminx Magic Dodecahedron; in (e), the Pyraminx 
Magic Tetrahedron; in (f), the Pyraminx Magic Icosahedron; in (g), the Pyraminx Ball; in 
(h), the Pyraminx Magic Crystal; in (i), a 4x4x4 cube in a scrambled state; and in (j), the 
Pyraminx Ultimate. 





FIGURE 15-3. Uwe Meffert's Pyraminx. In (a), a scrambled state. In (b) and (c), modes of 
twisting are shown. Turns of the form shown in (c) are the ones that the official notation is based 
on. In (d), names for the four 120-degree clockwise turns: L (left), R (right), T (top), and 
B (back). 


called The Amazing Pyraminx, which is available in paperback through Meffert. In it, 
Turner-Smith does for the Pyraminx what David Singmaster did for the Cube in his 
Notes on Rubik's 'Magic Cube'. (Incidentally, Singmaster is continuing in his role as 
world clearinghouse for Cubology. He now puts out a newsletter amusingly titled 
Cubic Circular, available by writing to David Singmaster, Ltd. at 66 Mount View 
Road, London N4 4JR, England. Finally, I should mention that a quarterly magazine 
called Rubik's will be coming out of Hungary beginning this summer, available for $8 
a year. Write to P.O. Box 223, Budapest 1906, Hungary.) Like Singmaster, Turner- 
Smith develops a notation and uses it to convey some of the group theory connected 
with it, which affords one a deeper appreciation of the object than mere mechanical 
solving does. 

It is interesting that there are two distinct ways of manipulating and describing 
the action of the Pyraminx. You can rotate either a face or a small pyramid. The two 
views are equivalent but complementary, since a face and its opposing small pyramid 
make up the whole object. Turner-Smith sees the small pyramids as movable and the 
faces as stationary. We shall adopt this view now, and later return to comment on the 
complementary one. Let us name the four possible moves, then. (See Figure 15-3d.) 
Each one rotates a small pyramid, either at the Top (T), Back (B), Left (L), or Right 
(R). The letters T, B, L, R stand for clockwise 120-degree turns, and T', B', L', R' 
stand for counterclockwise 120-degree turns (as seen when looking at the rotating tip 
along the axis of rotation). Notice that any move leaves all the vertices in place 
(although twisted). Therefore, one can consider the four vertices as stationary 
reference points, much like the six face centers of the Cube. In fact, at the very start of 
the solving process they can quickly be twisted to agree with each other, and from 
then on they provide an identifying color for each face. Thus one can consider the 
four tip-pyramids either as decorative ornaments or as useful signposts. 

In the Cube, the elementary objects that change location are usually called 
cubies or cubelets. What are the corresponding elementary objects here? They are not 
all just small pyramids. As on the Cube, it turns out that there are three types: edge 
blocks, middle blocks, and the above-mentioned tips. They are shown in Figure 15-4. 
As you can see, to each vertex there corresponds one middle block, having three 
"trianglets" of different colors, just as does the tip perched on top of it. Also like a tip, 
a middle block never leaves its home location, but only twists. As a consequence, the 
tips can be considered "trivially solvable" parts of the Pyraminx, and the middle 
blocks as "easily solvable". 

This leaves six edge blocks, each having two colors, that can travel and flip, 
just like the edge cubies on a Cube. As a matter of fact, it turns out that the constraints 
on flipping and swapping edges are exactly analogous to those applying to the edge 
cubies on the Cube: two edges must flip at once, and only even permutations of edge 
locations (permutations where an even number of edge swaps have taken place) are possible. 



FIGURE 15-4. Naming four types of piece in a Pyraminx. In (a), a tip; in (b), an 
edge; in (c), a middle block; and in (d), another useful though non-basic unit: a small pyramid. 

This means that one can quickly enumerate the number of different ways 
edges can be distributed about the Pyraminx. Without the constraints, the edges could 
be dropped into place in 6! (6 factorial), or 720 different ways -the first edge into six 
slots, the second into five, and so on. But the requirement that the permutation be 
even divides this by two, to give 360. Also, if unconstrained, each edge could be in 
either of its two orientations, thus giving 2^6, or 64, different possibilities, but once 
again, we must divide by 2 because of the flipping-constraint, thus getting 32 distinct possibilities. 



Multiplying these two figures together, we come up with 11,520 "interestingly 
different" states of the Pyraminx. Of course, if you want to take into account the 
middle blocks and the tips, each of them has 3^4 (or 81) ways of twisting, and they are 
quite unconstrained, so that you can inflate the figure up to 75,582,720 distinct 
scramblings altogether! Perhaps the most realistic figure discounts the tip orientations 
but counts the middle blocks. In that case, one has 81 X 11,520 = 933,120 
"nontrivially distinct" states of the Pyraminx. 
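The arithmetic is easy to check. Here is a minimal Python sketch of the counting argument above (the figures are from the text; the variable names are mine):

```python
from math import factorial

# Six edge blocks: even permutations only, and an even number of flips.
edge_positions = factorial(6) // 2   # 720 / 2 = 360
edge_flips = 2**6 // 2               # 64 / 2 = 32
edges = edge_positions * edge_flips  # 11,520 "interestingly different" states

middles = 3**4  # four middle blocks, each free to twist three ways: 81
tips = 3**4     # four tips, likewise unconstrained: 81

print(edges)                   # 11520
print(middles * edges)         # 933120  (nontrivially distinct)
print(tips * middles * edges)  # 75582720 (counting tip twists too)
```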

The shortest solving algorithm now known takes 21 twists, and was 
discovered with the aid of a computer. It is easy to prove that from some positions one 
needs at least twelve twists to get back to START, but the nature of God's algorithm 
(which, by definition, always chooses the shortest possible route home) and the 
maximum number of twists it requires are unknown, as they are on the Cube. 

* * * 

When he designed the Pyraminx, Meffert was quite aware that there were 
other ways to slice it up internally, even while keeping the same surface appearance, 
with nine trianglets per face. Therefore, he figured out some alternate internal 
mechanisms that allow richer modes of twisting. The object I have just described is 
called the Popular Pyraminx. The Master Pyraminx is a different kind, and is slated to 
become available. On it, above and beyond all the movements of the Popular 
Pyraminx, each edge can swivel about its midpoint by 180 degrees, thus allowing the 
exchange of any two tips along with the flipping of a single edge piece. (See Figure 15-5.) 

FIGURE 15-5. Showing a physically distinct twisting mode, applicable only to the 
Master Pyraminx. 



FIGURE 15-6. Meffert's Octahedron. In (a), a top view showing how small pyramids and 
tips can spin, much as on the Pyraminx. Here, however, the natural angle of twist is 90 degrees. 
In (b), another conceivable way that a "magic octahedron" could be made: with faces that can 
spin 120 degrees about their centers. The only manufactured item turns as shown in (a). In 
(c), a diagram demonstrating the mapping between the Octahedron's 90-degree vertex-centered 
twists and the Cube's 90-degree face-centered twists. In (d), Stan Isaacs' coloring scheme by which 
an ordinary 3x3x3 Cube emulates a Magic Octahedron, thus concretely demonstrating the idea. 



The flexibility requires each middle block to break up into several pieces as well, 
some of which can travel all around the pyramid. Thus one has a much more 
complicated puzzle. The mechanism is exceedingly tricky, because during such 
swiveling, each of the two moving tips is in contact with the rest of the Pyraminx 
through a little invisible piece inside the now broken-up middle block. That little 
piece does not know to which edge it owes "allegiance". As a result, the invisible 
piece and the tip would together fall off (since that contact does not constitute a 
permanent link) were it not for a clever piece of engineering that allows each tip to 
"lock" its little piece to the appropriate edge piece before the swiveling starts, then to 
"unlock" it after the swiveling is over. Turner-Smith cites the number of scrambled 
states of the Master Pyraminx as being in excess of 446 trillion. 

Once bitten by the "cube bug", Meffert did not stop here, but moved further 
into the world of regular polyhedra. His next step was to design an eight-colored 
octahedron each of whose triangular faces is again divided into nine trianglets. How 
does it twist? Just as with the Pyraminx, Meffert perceived the possibility of various 
modes of twisting. It is interesting that the two equivalent ways of describing the 
twists of the Popular Pyraminx become inequivalent when applied to the octahedron. 
Recall that these involved twisting either faces or small pyramids. The reason they 
were essentially equivalent is that the rotation of a face is complementary to the 
rotation of a small pyramid. However, on an octahedron, rotating a face 120 degrees 
is obviously not complementary to spinning a small pyramid (centered on a vertex) 90 
degrees. The distinction is shown in Figure 15-6, parts (a) and (b). Realizing this extra 
degree of freedom, Meffert designed a mechanism for each of the two ways of twisting. 

The octahedron that will soon be marketed (under the disappointingly clunky 
name Pyraminx Magic Octahedron) is the one in which the six small pyramids can 
spin. Thus there are three orthogonal axes of rotation just as in the Cube. This 
seemingly trivial resemblance to the Cube actually contains much more than a grain 
of truth. In fact, the Meffert Octahedron and 

the Cube amount to two surface manifestations of one deep abstract idea. To see how 
this comes about, notice that a cube and an octahedron are dual to each other: that is, 
the face centers of either shape form the vertices of the other shape. Thus the six face 
centers of a cube define an octahedron, and the eight face centers of an octahedron 
define a cube. 

Imagine a Cube, and, sitting inside it, the octahedron that its face centers 
define (see Figure 15-6c). Each twist of a face of the Cube induces a twist on the 
corresponding pyramid of the octahedron. Each scrambled position of the Cube seems 
thus to correspond to a scrambled position of the Octahedron. But this is not quite 
true. To see what is correct, one needs to see what maps onto what, in the 
correspondence of Cube and Octahedron. Like the Popular Pyraminx, the octahedron 
has tips, middle pieces, and edges. As before, the tips are largely ornamental, and the 
middle pieces rotate as wholes. Thus a middle piece on the octahedron (together with 
its decorative 



tip) maps onto a face center on the Cube. This leaves only edge pieces on the 
Octahedron-and it is apparent that these, having two facelets, must map onto edge 
pieces on the Cube. Where does this leave the Cube's corners? Nowhere. They have 
no analogue on the Octahedron, which is a considerable simplification. 

To visualize the Cube-Octahedron correspondence properly, you have to color 
one of the puzzles in an alternate manner. Since the Cube is more familiar, let's see 
how it has to be altered to "become" a Magic Octahedron. The proper coloring, 
corner-centered rather than face-centered, is shown in Figure 15-6d. Stan Isaacs, a 
computer scientist and puzzlist par excellence, has made up one of his dozens of 
cubes to simulate a Meffert Octahedron. Someone fluent in solving the ordinarily 
colored 3X3X3 Cube will therefore find that their expertise does not quite suffice 
to handle Isaacs' strangely colored cube, because now the orientation of face centers 
matters! On the other hand, there is a corresponding simplification as well: "quarks" 
no longer exist on this cube. That is, there is no such thing as a twisted corner, simply 
because all the corner cubelets are white on all sides. 

All you need to solve this cube (or the Octahedron) is the ability to restore the 
edges and face centers (with the added novelty of orientations). Of course, not all 
"magic octahedra" will be equivalent to simple recolorings of the 3 X 3 X 3 cube, 
since they may not turn about those three axes. In particular, Meffert's alternate 
twisting-mode for the octahedron (where faces twist 90 degrees) is quite unrelated to 
the Cube. 

In his 1982 catalogue, Meffert shows a picture of an icosahedron (guess what 
its name is!) whose twenty triangular faces are not subdivided at all; they move five at 
a time, swirling about any of the twelve vertices. (See Figure 15-2f.) Since the 
movement is vertex-centered rather than face-centered, it should make you think of the 
icosahedron's dual solid, the dodecahedron. The dual puzzle would have face-centered 
movement, in the same way as the dual puzzle to the Octahedron, with its vertex- 
centered movement, is the Cube, with its face-centered movement. (Incidentally, what 
would be the dual puzzle to the Pyraminx?) 

In fact, in Meffert's catalogue are shown two other dodecahedral puzzles, 
reproduced in Figure 15-2g and Figure 15-2h, for your amazement and bemusement. 
The less complicated one with the asymmetric-looking slices is called the Pyraminx 
Ball, and the beautifully crisscrossed one is called the Pyraminx Crystal. The Ball has 
four axes of rotation, like the Pyraminx, while the Crystal has six. These should be 
hitting the market in midsummer. 

* * * 

At this point, you might well be wondering whether there could be a cube (I 
mean a genuine, six-sided, square-faced cube!) with a vertex-centered twisting 
mechanism. No sooner said than done! Tony Durham, a British journalist, was the 
first to think of this idea. He showed his design to Meffert 



FIGURE 15-7. Tony Durham's Skewb, caught in mid-twist (a). In (b), the labeling of 
the Skewb's eight corners. (See also Figure 15-1.) 

who developed it into a marketable product by incorporating mechanical features that 
had proved useful on the Pyraminx. The object in question is shown at rest in Figure 
15-1b and in motion in Figure 15-7a. I call this the Skewb, although Meffert gives it 
the more prosaic title of Pyraminx Cube. 

Each of the Skewb's four cuts slices the whole into two equal halves. Each cut 
perpendicularly bisects one of the four spatial diagonals of the cube. If you think 
about it, you will see that the shape traced out by each cut as you run around the 
cube's surface is a perfect hexagon. Each cut crosses all six faces, so that every turn 
affects all the faces at once. In this respect, the Skewb is more vicious than the Cube, 
where on each turn two faces are exempt from change. Despite the simplicity of this 
object, it is quite hard to get used to its skew twist. Of course, that is part of its charm. 

Durham offers some insightful commentary on his invention in a remarkable 
set of notes he has written entitled "Four- Axis Puzzles". I would like to quote a few 
paragraphs from this document. 

The symmetry group generated by four threefold axes is the rotation 
group of the tetrahedron, and has order twelve. Almost all the well-known 
polyhedra, regular as well as semiregular, possess this tetrahedral symmetry, 
though their own symmetry may be much richer. So a four-axis mechanism may 
be put inside a polyhedral puzzle of any regular or semiregular shape, and the 
puzzle will keep its shape during play. The Pyraminx Ball may look odd at first 
glance, but it illustrates the beautiful way in which tetrahedral symmetry is 
buried in the richer symmetry of the dodecahedron. 

The cube mechanism found by Rubik does not have this property. It uses 
fourfold rotation axes, which are generally found only in the cube/octahedron 
family of solids. Thus, it is possible to 'build out' a Rubik cube into the shape of 
a dodecahedron. But to preserve that shape during play you must restrict 

On Crossing the Rubicon 


yourself to half-turns. Quarter-turns invoke a symmetry which the 
dodecahedron does not possess. 

All four-axis puzzles have a central ball or spindle. Four pieces (usually 
corners) are pinned directly to the ball. The standard Pyraminx has six free- 
floating edge pieces with 'wings' that hook under the corner pieces. The 
analogous free-floating pieces on the Pyraminx Cube are the square face- 
centers. The four-faceted pieces on the dodecahedral Pyraminx Ball play the 
same role. 

The Pyraminx Cube and Ball have four more free-floating pieces, which 
again are corners. These pieces have their own 'wings' which, in the START 
position, hook under the first set of free-floating pieces. Thus, there is a three- 
level hierarchy of interlocking pieces, conceptually similar to Rubik's, but 
geometrically very different. 

All eight corners of the Pyraminx Cube look alike. At first sight one 
might think that any two corners could be made to change places. In fact, four 
of the corners are free-floating and four are rigidly fixed to the central ball. The 
two types can never change place. The square shape of the face center pieces is 
deceptive, too. Inside, the mechanical parts of the square pieces are not so 
symmetrical. Such a piece can never return to its starting position (relative to 
the rigid set of four corners) rotated by 90 degrees. Only half-turns are possible. 

The standard Pyraminx has obvious fixed points-the four corners. 
Confronted with a Pyraminx Cube and knowing that four corners are fixed and 
four are free, one naturally wonders which are which. Actually it makes no 
difference. The four free corners move independently of the fixed ones, but they 
always move together as if physically linked. 

Durham proceeds to give Turner-Smith's TBLR-T'B'L'R' notation for the 
Pyraminx, and mentions that it is adaptable to any four-axis puzzle (such as his 
Skewb), simply by letting TBLR name four of the centers of rotation. (On the 
Pyraminx, this could mean either the four tips or the four face centers. On the Skewb, 
this would be four of the tips, leaving four other tips unnamed. See Figure 15-7b.) 
Then any move can be transcribed. If it is centered on one of the named spots, just use 
the proper notation. If it is centered on one of the four unnamed spots, use the name 
for the complementary move, since it doesn't matter which half of the puzzle twists. 
(You may want to think about that for a moment. Actually, it is obvious, but it sounds 
like a tricky point.) Durham points out that it is sometimes useful to have names for 
the four remaining spots and for twists around them. He lets t, b, l, r fulfill that 
purpose. Thus T and t accomplish the same thing internally to the puzzle, but they 
leave it hovering in space in a different overall orientation. Although he concedes that 
it may become confusing, Durham advocates using a mixed notation on occasion. 

Sometimes you need to mix the notations to see what is going on. TbT'b' 
is one of the useful class of moves called commutators (two moves followed by 
their inverses, thus of the form xyx'y'), though you would never guess so from its 
description in regular coordinates (TBL'B') or alternate coordinates (tlt'b'). 



The Pyraminx Cube and Ball may be described as deep-cut puzzles in 
contrast to shallow-cut puzzles such as Rubik's Cube. In the latter, the cuts are 
made near to the surface. In deep-cut puzzles, they slash close to the puzzle's 
heart. The bulk of a shallow-cut puzzle remains stationary while you turn a 
small part of it. A deep-cut puzzle, however, raises serious doubt as to which 
part has been turned and which has remained stationary. This is why alternate 
sets of coordinates have to be taken seriously on deep-cut puzzles. 

Deep-cut puzzles also dictate a 'global' approach to solution. It is 
peculiarly difficult to work on one area of the puzzle without affecting the rest. 
However, as solution proceeds, this very fact comes to your aid. Pairs of corners 
magically untwist in synchrony. The last flip, the last swap is done for you 
automatically. As you close in for the kill, billions of pathways down which the 
puzzle might escape are closed off to it. Parity constraints are at work, and when 
every move activates five or eight interlocked permutation cycles-as it does in a 
deep-cut puzzle-parity constraints are powerful. 

In the section of his notes having to do with parity constraints, Durham 
includes the following humorous but insightful apology: 

Please forgive the loose use of the term parity to include tests for divisibility by 
3 (not only 2) or even more distant concepts. We shall use the term parity 
restriction for any constraint on imaginable transformations of the puzzle that 
prevents their accomplishment in normal operation of the puzzle. The list does 
not, for example, include the rule: 'Thou shalt not swap a face piece with a 
corner piece.' It is just too far-fetched. One might as well try to imagine a move 
that transformed the entire puzzle into depleted uranium or Gorgonzola cheese. 

Then he lists all the Skewb's "parity" constraints, in his generalized sense of the term. 

1. The four (fixed) corners TBLR may be permuted among themselves, as may 
the remaining four corners tblr, but mixing between the two sets is impossible. 

2. TBLR themselves move as a rigid tetrahedral unit. This constraint applies to 
their positions in space only (not to their orientations). 

2a. For exactly the same reasons, the remaining four (free) corners tblr move as 
a tetrahedral unit. They move independently of TBLR. In fact any of the 
twelve possible relative positions of tblr and TBLR can be reached in at most 
two puzzle moves. 

Although TBLR are fixed and tblr are free-floating, mathematically 
speaking, 2 and 2a have exactly the same status. Writers on the Rubik Cube 
have generally regarded the transposition of two face centers as an 
'unimaginable' transformation, while the swapping of two edge pieces is 
'prohibited but imaginable'. By analogy with this convention, 2a counts as a 
parity restriction while 2 does not! This is plainly unsatisfactory, and a better 
and more precise definition of 'parity' is badly needed. Is it a question of 
geometry? Of mechanics? Of topology? Note that the problem is in 
enumerating the impossible positions. The possible positions are readily enumerated. 



3. The sum of the twists of corners TBLR is always equal, modulo 3, to the 
twistedness of the puzzle, taken as a whole. 

(Here, twist applies to corners, and is either 0, +1, or -1. A corner's twist is 
measured relative to the rigid tetrahedron to which it belongs. Thus the twist 
of T is measured relative to TBLR. A clockwise rotation of a corner counts 
as +1, counterclockwise as -1. By contrast, the twistedness of the puzzle as a 
whole is a function only of the positions of the corners, not of their 
orientations. If the relative positions of TBLR and tblr are as in the START 
position, then the twistedness is 0. If they can be restored to START by one 
clockwise puzzle move, the twistedness is -1, and if by one counterclockwise 
move, then +1. If it takes one of each type, then the twistedness is again 0.) 

3a. Same as 3, only with tblr. 

From 3 and 3a, it follows that the total twist of TBLR always equals the total 
twist of tblr. Also, it follows that it is impossible to turn a single corner by 
120 degrees (i.e., to create an isolated quark). One might paraphrase 3 and 3a 
by saying that the puzzle 'knows', in three distinct ways, how many turns it is 
away from START (modulo 3). 

4. It is impossible to transpose exactly two face pieces. 

5. It is impossible for any face piece to turn in place by 90 degrees. 

6. It is impossible to flip a single face piece through 180 degrees. 

Durham offers proofs of these interesting facts, but as they are for the most part 
analogous to those on the Cube, I shall omit them here. By combining all these 
constraints, Durham comes up with the total number of scrambled states of his 
Skewb, which is 100,776,960. However, this assumes you have a way of telling the 
orientation of a face center, which (unless you mark it up) you don't. Hence the 
number of visually distinguishable states is reduced by five factors of two, to 
3,149,280 - a rather smaller number than for the Cube (4 X 10^19), but certainly the 
difficulty does not scale down proportionately with the number of states. (Could 
you even imagine what it would mean for a puzzle to be "ten trillion times easier" 
than Rubik's Cube?) 
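Both Skewb figures, and the "ten trillion times easier" quip, can be checked in a few lines of Python. (The exact Cube count, 43,252,003,274,489,856,000, is the standard figure and is my addition; the text gives only "4 X 10^19".)

```python
skewb_marked = 100_776_960       # states if face-center orientations are visible
skewb = skewb_marked // 2**5     # five factors of two vanish: 3,149,280

cube = 43_252_003_274_489_856_000  # standard 3x3x3 Cube count (~4 X 10^19)

print(skewb)          # 3149280
print(cube // skewb)  # about 1.4 x 10^13: "ten trillion times" more states
```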

* * * 

Durham's final observations carry Solomon Golomb's beautiful analogy between 
cubological phenomena and those of particle physics to even greater heights. 
Golomb pointed out that many fundamental particles have their counterparts on the 
3X3X3 Cube. They include the quarks (q), antiquarks (q̄), mesons (qq̄ pairs), 
baryons and antibaryons (qqq and q̄q̄q̄ trios). Durham extends the analogy as follows: 

The definition of twist must be modified for the purpose of particle physics. 
A clockwise twist of one of the corners TBLR is now given the value +1/3, 
as is a counterclockwise twist of any of the corners tblr. Either of these is a 
quark. Its opposite is an antiquark with value -1/3. It will be seen that twist 
corresponds to baryon number. The total twist of all corners is always an 
integer. A single puzzle move is always a meson. 



Quarks at the corners TBLR will be regarded as 'up' or u quarks; those at tblr 
will be 'down' or d quarks. Both quarks have isotopic spin 1/2. They are 
distinguished by the orientation of the isospin vector in its abstract space. 
The projection of the isospin, Iz, has the value +1/2 for the u quark and -1/2 
for the d quark. In the absence of strangeness, charm, etc., the electric charge 
Q of a particle is given by Q = Iz + B/2, where B is the baryon number. So u 
quarks have charge +2/3, while d quarks have charge -1/3. (All the quantum 
numbers are multiplied by -1 for the antiquarks.) Again the puzzle models an 
important feature of observed reality: all particles have integral electric charge. 
The relevant quantum numbers for our two quarks are as follows: 

         B        Iz        Q 
u       1/3     +1/2     +2/3 
d       1/3     -1/2     -1/3 
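Durham's charge relation is easy to confirm with exact rational arithmetic. A tiny sketch, using the quantum-number values stated in the surrounding text (the function name is mine):

```python
from fractions import Fraction as F

def charge(iso_z, baryon):
    # Durham's relation: Q = Iz + B/2
    return iso_z + baryon / 2

u_charge = charge(F(1, 2), F(1, 3))    # up quark
d_charge = charge(F(-1, 2), F(1, 3))   # down quark

print(u_charge)  # 2/3
print(d_charge)  # -1/3
```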
We can now assemble various hadrons (strongly interacting particles), as 
shown in the table below. Each particle is represented by two rows having 
four symbols each. The four places in the top row represent the twists on the 
TBLR corners; in the bottom row the same is done for the tblr corners. A 
quark is denoted by '+', an antiquark by '-'. 

+ 0 0 0   (π+ meson) 
- 0 0 0   (ud̄) 

+ + + 0   (Δ++) 
0 0 0 0   (uuu) 

- 0 0 0   (π- meson) 
+ 0 0 0   (ūd) 

+ - 0 0   (η meson) 
0 0 0 0   (uū) 

0 0 0 0   (π0 meson) 
+ - 0 0   (dd̄) 

Isotopic symmetry is a global symmetry, and the strong (nuclear) force is 
invariant under transformations that rotate the isotopic spin vector by the 
same amount for all particles. Such a transformation would, in a continuous 
fashion, transform all u quarks into d quarks and vice versa. Protons and 
neutrons would swap roles. The analogous process for the puzzle is the 
continuous rotation of the whole puzzle in space. It can indeed bring the 
TBLR corners to the former position of the tblr corners, so that an up quark 
becomes a down quark. 

This makes no difference to the 'strong interaction' (i.e., the normal 
operation of the puzzle). The TBLR and tblr corners are functionally 
identical. But it matters if you try to dismantle the puzzle: you will find that 
one set of corners is fixed to the core, and one is not. Such dismantling 
operations can be thought of as weak or electromagnetic interactions, which 
can break the conservation rules obeyed by the strong interaction. Actually 
they break the rules rather too well, since they allow the creation of single 
free quarks. 

Durham points out that the analogy still has weaknesses, such as the facts 
that neither charge nor baryon number is conserved, that there is no 

+ + 0 0   (proton) 
+ 0 0 0   (uud) 

+ 0 0 0   (neutron) 
+ + 0 0   (udd) 

- - 0 0   (antiproton) 
- 0 0 0   (ūūd̄) 

0 0 0 0   (Δ-) 
+ + + 0   (ddd) 



analogue to spin, that only two "flavors" of quark are represented (up and down), 
and that quark "color" is not modeled. Golomb, in the meantime, has been actively 
trying to find a way of modeling quark color in the 3 X 3 X 3 Cube analogy. 
Whatever the failings of this analogy, I find it one of the most provocative of all 
analogies I have ever encountered anywhere, and will be most astonished if it is 
purely coincidental. I somehow cannot help but believe that the fascinating 
patterns shared by these macroscopic puzzles and the microscopic particles reveal 
some underlying order and set of principles common to both. Indeed, I have faith 
that, if looked at in the proper way, the group-theoretical principles that govern 
these parity constraints on "cubes" can be transferred to the domain of particle 
physics, and yield fresh insights about the reasons for the symmetries among 
particles. There! If that doesn't prod some particle physicist into looking into this, I 
don't know what will! 

* * * 

Perhaps my favorite "cube" is the one I dubbed the IncrediBall. It is due to a 
German educator from Dortmund named Wolfgang Kuppers, and is in Meffert's 
catalogue. As of the time of this writing, I may be the world's fastest IncrediBall 
solver (or at least the fastest on my block!), with an average time of about six 
minutes. However, I am sure that my glory will not last long, once this puzzle is 
marketed widely by the Milton Bradley Company sometime this summer. Their 
trade name for it will be Impossi*Ball. It is pictured in Figure 15-8. 

This I-Ball is basically a rounded-off dodecahedron each of whose twelve 
faces (dodecalets, I'll call them) has been subdivided into five elementary 
"trianglets". Thus there are 60 such trianglets. If, instead of seeing them in groups 
of five, you take them three at a time, you'll find that they define a rounded-off 
icosahedron (the dual of the dodecahedron). Such a group of three trianglets I call 
an icosalet, and there are twenty such, each one having a unique arrangement of 
three colors. The icosalets are the elementary, unbreakable units out of which the 
IncrediBall is constructed; they correspond to the cubelets on the Cube, or the 
elementary pyramids of the Pyraminx. Whereas on the Cube there are three kinds 
of cubelet (edges, faces, and corners), here all icosalets are of a single type. For 
this reason, the I-Ball is less forbidding than at first it might appear. Its pristine 
state is one in which each dodecalet is all of one color. Meffert has used only six 
colors, rather than twelve, each color being used in two antipodal dodecalets, but 
this does not in any way change the difficulty of the puzzle. 

The way it turns is a little surprising. Any group of five icosalets that meet at 
a point (the center of a dodecalet) form what I call a circle, which will rotate as a 
unit, twisting 72 degrees to the left or right. (Such a circle is analogous to a 
"layer"-a face together with its fringe-on the Cube.) Thus five such 



FIGURE 15-8. Wolfgang Kuppers' IncrediBall (or Impossi*Ball, if you wish). In 
(a), the pristine state. The triangles with curved sides are called icosalets. In (b), 
an IncrediBall caught in the midst of a "bumpy twist". Each such twist involves 
rotating a "circle" (composed of five icosalets) through 72 degrees. In (c), a state 
with just one quark visible (one icosalet twisted 120 degrees clockwise). In (d), one 
icosalet has been removed. This allows another icosalet to slide in and occupy the 
vacuum, meanwhile leaving behind its own vacuum. As an icosalet-shaped hole glides 
around the puzzle, order can be created or destroyed. This sphere-based puzzle 
thus resembles Sam Loyd's planar "15 puzzle". 


twists return that group to its starting position. However, the "circle" defined by the 
five icosalets is not truly circular, and if the trianglets were rigidly held at a fixed 
distance from the center, it simply would not be possible to rotate such a group. 
But Meffert's mechanism ingeniously gets around that problem by having the 
icosalets lift up slightly as they go over "bumps", so that the solid flexes noticeably. 
As a result, twisting the I-Ball has a delightful "organic" feel to it. 

The constraints here are the same old story: all permutations are even, 
which means you cannot swap two icosalets-the best you can do is cycle three of 
them, or swap two pairs simultaneously; and of course, quarks and antiquarks must 
add up to a total twist that is integral. Taking into account these constraints, I 
calculate that the total number of IncrediBall scramblings is 
23,563,902,142,421,896,679,424,000, or 24 × 10^24-about 24 trillion trillion. This 
is not quite a million times larger than the figure for the Cube. It's also about 40 
times larger than Avogadro's number, for whatever that's worth. 
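That figure can be reconstructed in a few lines of Python. This is a sketch based on the constraints just stated, plus one assumption the text leaves implicit: since the IncrediBall has no fixed reference pieces, the 60 rotations of the whole ball are divided out.

```python
from math import factorial

# The 20 icosalets can be permuted in 20! ways, but only even
# permutations are reachable: divide by 2.
positions = factorial(20) // 2

# Each icosalet has 3 orientations, but quarks and antiquarks must
# add up to an integral total twist: divide by 3.
orientations = 3**20 // 3

# Assumption: no piece is fixed in space, so states differing only by
# one of the 60 rigid rotations of the whole ball are identified.
scramblings = positions * orientations // 60

print(scramblings)  # 23563902142421896679424000
```

The same three ingredients (even permutations, a global twist constraint, and a quotient by whole-object symmetries) recur throughout the cube family.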

How hard is it to solve this puzzle? Is it harder than the Cube? I found it 
easier, but that's hardly fair, since I had already done the Cube. However, in 
Durham's terms, the IncrediBall is decidedly a "shallow-cut" puzzle, which means 
that a more or less local approach will work. I found that, when I loosened my 
conceptual grip on the exact qualities of my hard-won operators for the Cube, and 
took them more metaphorically, I could transfer some of my expertise over from 
Cube to I-Ball. Not everything transferred, needless to say. What pleased me most 
was when I discovered that my "quarkscrew" and "antiquarkscrew" were directly 
exportable. Of course, it took a while to discover what such an export would 
consist in. What is the essence of a move? Which aspects of it are provincial and 
sheddable? How can one learn to tell easily? These are very difficult questions, to 
which I do not have the answers. 

I gradually learned my way around the IncrediBall by realizing that a 
powerful class of moves consists of turning only two overlapping "circles" in a 
commutator pattern (xyx'y'). So I studied such two-circle commutators on paper, as 
shown in Figure 15-9, until I found ones that filled all my objectives. They 
included quarkscrews, swaps, and 3-cycles, which form the basis of a complete 
solution. In doing this, I came up with just barely enough notation to cover my 
needs, but I did not develop a complete notation for the IncrediBall. This, it seems 
to me, would be very useful: a standard "universal notation", psychologically as well 
as mathematically satisfying, for all cubelike puzzles. However, it is a very ambitious 
project, given that you would have to anticipate all conceivable future variations on 
this fertile theme-hardly a trivial undertaking! 
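The commutator pattern xyx'y' is easy to experiment with in code as well as on paper. Here is a minimal sketch (my own illustration, not Hofstadter's notation) with permutations as Python dictionaries: the commutator of two overlapping 5-cycles disturbs only a few positions near their overlap, which is exactly what makes such moves useful.

```python
def compose(p, q):
    """Apply q first, then p (both dicts mapping position -> position)."""
    keys = set(p) | set(q)
    return {k: p.get(q.get(k, k), q.get(k, k)) for k in keys}

def inverse(p):
    return {v: k for k, v in p.items()}

def cycle(*elts):
    """A cyclic permutation sending each element to the next."""
    return {elts[i]: elts[(i + 1) % len(elts)] for i in range(len(elts))}

# Two overlapping 5-cycles ("circles"), sharing positions 4 and 5.
x = cycle(1, 2, 3, 4, 5)
y = cycle(4, 5, 6, 7, 8)

# The commutator x y x' y': apply x, then y, then x inverse, then y inverse.
comm = compose(inverse(y), compose(inverse(x), compose(y, x)))

moved = {k for k, v in comm.items() if v != k}
print(sorted(moved))  # [3, 4, 5, 8]
```

Of the eight positions, only four move, and they move as a double swap (3 with 8, 4 with 5), echoing the outcome summarized in Figure 15-9.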

It is interesting that my diagrams of overlapping circles turn out to be closely 
connected with another lovely family of generalizations of the Cube, due to a 
Spanish physicist named Gabriel Lorente. His puzzles are mostly planar and 
consist precisely in networks of overlapping circles. (See Figure 


FIGURE 15-9. Operators involving just two overlapping circles. In (a), names for 
the four possible 72-degree twists. In (b), the operator "doui" (down-out-up-in) is 
applied. Note that it has the form of a commutator, involving alternating inverses. 
In (c), the outcome is summarized: a double swap has been effected. 

15-10.) The planar ones he calls the Grill and the Trebol. In each of them, circles 
can be given partial twists and pieces of them are thereby shuffled and 
redistributed. Extending this notion to a spherical surface, Lorente came up with an 
elegant IncrediBall-like puzzle, which he calls the Florid Sphere. 

When you look closely at Lorente's puzzles, the IncrediBall, and even the 
Cube, you begin to see that the essence of all these puzzles seems to reside in 
overlapping orbits. In fact, one could even maintain that the three-dimensionality 
of all these puzzles is irrelevant; their interest is essentially due only to the 
properties of intricately overlapping closed orbits in a two-dimensional space, 
possibly curved like a sphere. 

The quintessential planar overlapping-circle puzzle was invented, as it turns 
out, way back in the 1890's, although recently it has been repeatedly rediscovered 
in the wake of the Cube. All such puzzles basically involve two circles of marbles 
that intersect at various spots. (See Figure 15-11.) You can choose to cycle either 
circle, and the marbles at the intersections will thus be absorbed into whichever 
circle is moving. 

While we're discussing two-dimensionality, it is worthwhile pointing out 


FIGURE 15-10. Four puzzles by Gabriel Lorente. In (a) and (b), two schemes he 
calls "Grills". Note that both are based on a square lattice of circle centers. In (c), 
his "Trebol" puzzle, where centers form a triangular lattice. In (d), the centers of 
circles lie on a sphere. This is his "Florid Sphere". Which previously discussed 
puzzle is it equivalent to? 

that the IncrediBall's internal construction allows it to be transformed rather 
amazingly into what I call the "19" puzzle-a two-dimensional curved-space version 
of Sam Loyd's famous "15" puzzle (the 4 X 4 square puzzle with one "squarelet" 
removed, allowing you to rearrange the remaining 15 squarelets by shifting the 
hole about). This was first observed by Ben Halpern, while he was idly playing 
with an IncrediBall. He had removed one single icosalet (which is possible, one of 
the beauties of the IncrediBall being that its mechanism readily allows disassembly 
and reassembly), leaving a hole, and he observed that, because all icosalets are 


FIGURE 15-11. Does this 90-year-old type of two-dimensional puzzle, with just 
two intersecting rings of marbles, capture the ultimate essence of all modern 
"cubic" puzzles? 

congruent, the hole could wander about all over the sphere, just like the square 
hole in the 15 puzzle. (See Figure 15-8d.) Again, this seems to underscore the two- 
dimensional nature of these puzzles. 

The claim that these puzzles are two-dimensional comes from the fact that 
only pieces on their surfaces move; there is. no exchange between the interior and 
the exterior. For an extreme case, imagine the Earth as a giant puzzle, its entire 
surface covered with trillions of overlapping circles of marbles. With a hundred 
million turns, you could ship a marble from New York to San Francisco. Clearly 
this would be in essence a two-dimensional puzzle. The smallness of the circles 
relative to the size of the Earth makes this obvious. (However, I surely wouldn't 
want to think about solving such a puzzle, whether it's two-dimensional or not!) 

By contrast, consider two objects about to come out: Ideal's 4 X 4 X 4 cube, 
tastelessly marketed as Rubik's Revenge, and Meffert's Pyraminx Ultimate, a 5 X 5 
X 5 with shaved corners. Both are shown in Figure 15-2, parts (i) and (j). In these 
objects, there are circles on a much more global scale. Namely, the 4 X 4 X 4 has an 
"Arctic Circle", a "Tropic of Cancer", a "Tropic of Capricorn", and an "Antarctic 
Circle". The Pyraminx Ultimate has an Equator as well. 


On the 3 X 3 X 3 Cube, one could get away with ignoring the Equator by 
describing equatorial twists in terms of their complements, like rotating the slices 
of bread instead of the meat in a sandwich. Singmaster's notational choice for the 3 
X 3 X 3 Cube reflects his propensity to describe face centers as stationary. Thus 
for him, bread slices move while the meat stays put. Theoretically, this is fine, but 
realistically, people just do not hold their sandwiches-pardon me, their Cubes-in 
one fixed orientation. Moreover, when you pass to higher orders, this view will not 
suffice. Imagine a multilayer club sandwich with three slices of bread and two 
different kinds of meat. For this, you simply have to expand your notational 
horizons! 

An elegant set of names for the six possible meat-slice, or equatorial, 
moves on the 3 X 3 X 3 Cube has been suggested by John Conway, Elwyn 
Berlekamp, and Richard Guy in their book Winning Ways. They employ Greek 
letters with clever mnemonic justifications. These are shown in Figure 15-12. With 
some modification, they could be adapted to slices on higher-order cubes. 

Slice moves of this more global sort are like giant circles of marbles 
stretching around the Equator or the Tropic of Capricorn; their radii are of the 
same order of magnitude as the radius of the underlying three-dimensional object. 
The topology of linkage of circles becomes much more complicated than in the 
case where the circles are small and every connection is very local. To describe the 
linkage economically, one would be forced to talk about the way the circles are 
embedded in 3-space. In this sense, the higher-order cubes can truly be said to be 
intrinsically three-dimensional puzzles. 

There are, it seems, endless new spinoffs of the Cube being created. It is 



FIGURE 15-12. The Conway-Berlekamp-Guy nomenclature for twists of a 
3X3X3 Cube's equatorial slices. This notation can be generalized to higher- 
order cubes. 


a very fertile idea. H. J. Kamack and T. R. Keane, both chemical engineers, sent 
me a beautiful paper in which they describe their simulation of a four-dimensional 
3 X 3 X 3 X 3 cube on a computer. They call it Rubik's Tesseract, and they have 
computed the number of possible states it has. That number is: the product of 24! 
× 32! × 16!/4 (the number of permutations of position of the elementary "tessies" 
out of which it is built) with (2^24/2) × (6^32/2) × (12^16/3) (the number of legal 
orientations of the tessies within their niches, which the authors somewhat 
hesitantly term "tessicles", by analogy with "cubicles"). This number comes to 
approximately 1.76 × 10^120, which, they point out, is about the same size as the 
number of possible games of chess. (I don't think it would be an exaggeration to 
say that if Ideal were marketing this puzzle, their publicity would shamelessly 
proclaim, "Over 3 trillion combinations!") Kamack and Keane have made many 
provocative discoveries, which unfortunately I have no space to report on at this 
time. I was also sent a fascinating paper by George Marx and Eva Gajzago, two 
physicists at the famous Roland Eötvös University in Budapest. In it they give a 
definition of "entropy" on the Cube and describe some statistical results computed 
by a grammar school student named Victor Zambo. These are matters I would like 
to go into at some future time. 
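As a sanity check, the Kamack-Keane figure can be recomputed directly from the two factors quoted above (a Python sketch; arbitrary-precision integers make the full 121-digit number exact):

```python
from math import factorial

# Permutations of position of the tessies, as stated: 24! * 32! * 16!/4.
positions = factorial(24) * factorial(32) * factorial(16) // 4

# Legal orientations within the "tessicles": (2^24/2) * (6^32/2) * (12^16/3).
orientations = (2**24 // 2) * (6**32 // 2) * (12**16 // 3)

states = positions * orientations
print(len(str(states)))  # 121 digits, i.e. about 1.76 x 10^120
```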

* * * 

I would like to close by discussing the astonishing popularity of the Cube. 
In the New York Times' paperback bestseller list of November 15, 1981, three 
cube booklets figured on the list. The positions they occupied? First, second, and 
fifth. People often ask, "Why is the Cube so popular? Will it last? Or is it just some 
sort of fad?" My personal opinion is that it will last. I think that the Cube has some 
sort of basic, instinctive, "primordial" appeal. Its conceptual pizzazz comes from 
the fact that it fits into a niche in our minds that connects to many, many general 
notions about the world. So here is an attempt to characterize that quality. 

• To begin with, the Cube is small and colorful. It fits snugly in the hand 
and has a pleasing feel. Twisting is a fundamental and intriguing motion that the 
hand performs naturally. The object itself has overall symmetry, so that it can be 
rotated as a whole without its "feel" changing. (This is in contrast to many puzzles 
that have at most one axis of symmetry.) Quite surprisingly, there are not many 
puzzles or toys that give the mind and fingers a genuine three-dimensional workout. 

• Although it gets all scrambled up, the object itself stays whole. (This is in 
contrast to many Humpty-Dumpty-ish puzzles that come apart into scads of pieces 
that may get scattered all over the floor.) That it manages to stay in one piece 
when it has so many independent ways of twisting is initially amazing, and 
remains mysterious even after you've seen its "guts". 


• The object is a miniature incarnation of that subtle blend of order and 
chaos that our world is. Most of the time, you just cannot predict what 
repercussions even simple actions will have-they simply have too many side 
effects. A few tiny actions can have vast, interlocking consequences, and become 
practically un-undoable. One can easily become paralyzed by fear, not wanting to 
make any move at all, sensing that with no trouble at all one can get totally, 
irretrievably, hopelessly lost. 

• There are plenty of patterns, some attainable, some unattainable. 
Sometimes they are simple to generate, but one can't see how they emerge. 
Sometimes they are hard to generate, yet one clearly understands how they arise. 

• There are many routes to any state, and the shortest is nearly always 
completely unknowable. The solution to a difficult situation is hardly ever to back 
out the way you came in, but to find an alternate and completely different escape 
route. One feels a little like someone trapped in a cave with no light, unable to 
sense the whole space, able to grope about only very locally, wondering whether it 
is even humanly possible to have such an overview. (One wonders about God's 
algorithm: Is it humanly comprehensible?) 

• The Cube is a rich source of metaphors. It furnishes analogies to particle 
physics (quarks, etc.), to biology (a move-sequence as a "genotype" and the pattern 
it codes for as a "phenotype"), to problem solving in everyday life (breaking a 
problem into parts, solving it stage by stage), to entropy and path-finding, and on 
and on. It even touches theology ("God's algorithm") and many other phenomena. 

• There are different approaches to understanding the Cube. In particular, 
there is a strong contrast between the "algebraic" approach and the "geometric" 
approach. In the algebraic, or mathematical, approach, long sequences of 
operations are compounded out of shorter sequences, so that after a while one has 
no idea why one is doing the various individual twists-one just relies on the 
sequences, as wholes, to work. Though efficient, this is risky. In the geometric, or 
commonsense, approach, eye and mind combine to choose twist after twist, each 
twist having a clear reason as part of a carefully charted pathway. Though 
inefficient, this is reliable. These approaches, of course, can serve as metaphors for 
styles of attacking problems in life. 

• The Cube's universe has a strange population. Aside from its varieties of 
"cubies" and modes of twisting, there are such intangible qualities as "flippedness" 
or "twistedness", which one quite literally moves about on the Cube (e.g., in the 
form of quarks), just as one moves the tangible cubies. Similarly, the word "here" 
can designate a "place" that moves to and fro during a sequence of twists. The 
interlocking and nested reference frames that one jumps between in trying to 
restore order to the Cube vividly exemplify the layered way in which we 


conceive of space-indeed, the layered way in which concepts themselves are 
structured in our minds. 

• Among the Cube's less intellectual charms are the magic of motion too 
swift for the eye; the thrills of speed, competition, and grace; the varying levels of 
knowledge that one can gain; the enjoyment of exchanging information and 
insight. And, needless to say, the very idea that such a tiny innocent object 
conceals such a vast universe of potential. 

• Finally, consider the metaphor the Cube offers for the state of the world 
(one that has been exploited in various political cartoons). The globe is in a mess 
(as shown in Figure 15-13), and the leaders of various 

FIGURE 15-13. The sad state of the globe. 

countries want it to be "fixed". But they are unwilling to relinquish any tiny bit of 
order they have achieved. They cling to old, useless achievements because they are 
too fearful of letting go and temporarily abandoning what partial order they have in 
order to achieve greater order and harmony. They lack a mature, global view, one 
that recognizes that a willingness to make sacrifices in the short run can wind up 
producing much greater gains in the long run. 

I am confident that The Cube, as well as "cubes" in general, will flourish. I 
expect new varieties to appear for a long time to come, and to enrich our lives in 
many ways. It is gratifying that a toy that so challenges the mind has found such 
worldwide success. I hear that it's now very popular in China. 


Perhaps one day it will even penetrate into the Soviet Union, to my knowledge the 
last bastion of the Cube-Free World. 

Post Scriptum 

I wrote the preceding two columns over a year apart. It has now been two 
years since the second of them was written. The major cube news since then is, sad 
to say, that there has not been much major cube news since then. What apparently 
happened was simply a worldwide cube glut. Cubes-cubical and otherwise-were 
coming out of everybody's ears, and it was just a little too much. I can understand 
that, but it saddens me to see something so exciting fade so totally. 

There are still a number of things worth mentioning. A good place to begin 
is with the origin of the cube-that is, of magic solids in general. Shortly after my 
second cube column appeared, I received a rather plaintive letter from a Fresno 
high school teacher named William O. Gustafson, who claimed that he was, in 
some sense, the true inventor of the idea of the Magic Cube. What he actually had 
invented-in 1958-was a sphere sliced by three orthogonal planes into eight 
congruent pieces (octants), in such a way that any two opposite hemispheres (each 
composed of four octants) could turn. This amounts to a spherical 2 X 2 X 2 cube, 
a cubical variant of which was marketed by Ideal Toy Company some twenty-odd 
years later under the name "Rubik's Pocket Cube". Gustafson called his toy 
"Gustafson's Globe". 

To substantiate his claim, Gustafson enclosed photocopies of a good deal of 
correspondence he conducted in 1960 with numerous toy companies (he wrote 76 
of them!), the Japanese patent office (he received a patent)-and even Martin 
Gardner. Gardner's card to him was interesting. It said: 

That is an interesting puzzle that you propose, but I am at a loss for suggestions 
on how to interest a toy dealer in it. My experience has been that it is almost 
impossible to make any money with a puzzle unless you are the manufacturer 
yourself, with your own toy company. 

An interesting comment, in light of what happened with Rubik's Cube. In addition 
to his 2 X 2 X 2 Globe, Gustafson developed a 3 X 3 X 3 version, but felt that he 
should work first on getting the simpler puzzle out, so most of his correspondence 
concerned that one. 

Gustafson also enclosed for me a photocopy of a wry letter of condolence 
sent to him by a former student who, when he encountered Rubik's Cube, vividly 
recalled Gustafson's Globe from decades earlier. The card read: "With sincere 
sympathy in your recent loss, and a hope that time has helped in some small way to 
ease the sorrow in your heart". Below those poetic 


lines were the words "Gustafson's Globe", crossed out, and then the words "Rubik's 
Cube". 
I did a little bit of checking around, including talking with David 
Singmaster, probably the world's leading Cubological and Cubohistorical 
authority, and discovered that there is something to Gustafson's claim of priority. 
Not that it is likely that Ernő Rubik or Terutoshi Ishige ever heard of Gustafson. 
Nor is it by any means certain that Gustafson's Globe, had it been picked up by 
some toy company, would have been the overnight sensation that the Rubik-Ishige 
Magic Cube was. Still, though, it seems only fair to point out that people besides 
Rubik and Ishige had smelled some of the same alluring aromas in previous years, 
and for various reasons had not been able to arouse the interest of the world. 

* * * 

It is one of my firmest beliefs that good ideas almost never come out of 
nowhere, and that if a good idea arises in one person's mind, it is almost sure to 
have arisen in someone else's mind in some closely related version, or to do so 
very shortly. For that reason, whenever I am writing about a discovery or 
invention, I always try hard to indicate multiple credit when I can discover the 
people to whom genuine credit is due. The trouble is, when you bend over 
backwards to be equitable (notice how I always talk about Ishige and Rubik in the 
same breath, for instance), what inevitably happens is that someone you 
inadvertently slighted then writes you with some mixture of indignation, 
consternation, and disappointment, and requests equal time. 

I am glad to mention Gustafson's name and to give him credit for having 
had perhaps the world's first insight into this kind of three-dimensional rotational 
puzzle. But at the same time, I do not wish to leave out the names of Frank Fox, a 
British inventor who in about 1970 discovered-and patented-a 3X3X3 twisting 
sphere, and Larry Nichols of Cambridge, Massachusetts, who invented and 
patented a 2 X 2 X 2 cube around 1972, and who has just won a suit against Ideal 
Toy Co. for not giving him royalties on his invention. Whether Nichols' claim is 
any more deserving of retroactive compensation than those of Fox or Gustafson, I 
am not competent to say. 

All I can say is, these things get very, very messy-particularly when large 
amounts of money (or glory) are concerned. In the case of all three inventors- 
Gustafson, Fox, and Nichols-it seems clear that their inventions were far flimsier 
than the Rubik-Ishige Cube, and that the real reason the Rubik-Ishige Cube took 
off was that it could be manufactured and that it did hold together. But perhaps I 
am wrong. Perhaps it was a fluke of some sort that allowed Rubik (as contrasted 
with Ishige, for example) to get most of the credit. But whatever the case, it does 
illustrate my belief that people are extremely eager to attribute credit-even glory-to 
just one 


person, and to vastly simplify a historical situation in order to be able to label it 
and classify it in their minds. 

Who is willing to take the trouble to sift through all the murk surrounding 
such monumental discoveries as relativity, Godel's incompleteness theorem, 
digital computers, lasers, pulsars, the cosmic background radiation, or the structure 
of DNA? Who wants to track down all the complexly tangled threads of ideas that 
somehow led to one or two people getting all the glory? Almost without exception, 
if you dig deep, you will find that the way the credit is conventionally apportioned 
is unfair. Sometimes entirely the wrong person gets all the credit, sometimes 
several unknown people deserve to share the credit, and sometimes the story is 
even more complex and twisty than that. Somebody should write a book on bizarre 
cases of credit attribution! 

But my point is simply that with the cube, as with anything that has made a 
big hit, the world sees but the very tip of the iceberg, and someone in my position, 
who receives a lot of mail on these matters, sees only a bit below the tip. There is a 
lot more buried out there, and I am likely to get more letters from people who, 
upon reading my current attempt to be fair (in other words, this Post Scriptum), 
will feel especially slighted, given that I am trying to be fair and yet somehow 
failed to mention their names! Ah, me, what can you do? 

* * * 

In the intervening time, I have not heard of any faster algorithm for solving 
the Cube than Morwen Thistlethwaite's (described in Chapter 14). His algorithm, 
originally known to solve the Cube in at most 52 turns, has now been slightly 
improved on, thanks to computer searches. It is now known that 50 turns always 
suffice, confirming a conjecture that Thistlethwaite himself had made several years 
earlier. 

Although this improvement on Thistlethwaite's algorithm does not 
necessarily bring us appreciably closer to God's algorithm for the full 3X3X3 
Cube, God's algorithm is now known for two important smaller puzzles: the 2 X 2 
X 2 cube and Meffert's Pyraminx. Curiously, both of them require the same 
number of turns at worst: eleven (disregarding the trivial turns of the tips of the 
Pyraminx). The distribution of positions according to their distance from START 
is quite interesting. Here it is for the Pyraminx, as supplied to me by John Francis 
of Nutmeg, New Hampshire and Louis Robichaud of Sainte Foy, Quebec: 

1 configuration requires 0 moves 
8 configurations require 1 move 
48 configurations require 2 moves 
288 configurations require 3 moves 
1,728 configurations require 4 moves 
9,896 configurations require 5 moves 


51,808 configurations require 6 moves 
220,111 configurations require 7 moves 
480,467 configurations require 8 moves 
166,276 configurations require 9 moves 
2,457 configurations require 10 moves 
32 configurations require 11 moves 

Thus if START is at the "North Pole" of the space of all Pyraminx states, there are 
32 different "South Poles", all maximally distant from it, and by far the bulk of the 
population lives below the equator. 

By contrast, the 2X2X2 cube has 2,644 states at maximal distance 
(eleven) from START. (In this metric, R2 counts as just one move, rather than 
two.) Just as with the Pyraminx, the typical distance to START tends to be close to 
the maximum distance, but that tendency is exaggerated even more in the 2 X 2 X 
2. In particular, more than half the scrambled states require at least nine turns-and 
yet, ten turns will suffice to reach over 99.9 percent of all states! Here is the 
corresponding table: 

1 configuration requires 0 moves 
9 configurations require 1 move 
54 configurations require 2 moves 
321 configurations require 3 moves 
1,847 configurations require 4 moves 
9,992 configurations require 5 moves 
50,136 configurations require 6 moves 
227,536 configurations require 7 moves 
870,072 configurations require 8 moves 
1,887,748 configurations require 9 moves 
623,800 configurations require 10 moves 
2,644 configurations require 11 moves. 

This information comes from the autumn-winter 1982 double issue of Singmaster's 
Cubic Circular, and was apparently computed in several places around the world. 
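The two tables can be cross-checked against the known sizes of the two state spaces (a quick Python sketch; the totals are 933,120 for the tipless Pyraminx and 3,674,160 for the 2 X 2 X 2):

```python
# Entry at index d counts configurations exactly d moves from START.
pyraminx = [1, 8, 48, 288, 1728, 9896, 51808, 220111, 480467, 166276, 2457, 32]
pocket = [1, 9, 54, 321, 1847, 9992, 50136, 227536, 870072, 1887748, 623800, 2644]

assert sum(pyraminx) == 933120   # tipless Pyraminx state count
assert sum(pocket) == 3674160    # 2x2x2 state count: 7! * 3^6

# Average distance from START on the 2x2x2, showing how the bulk of
# the space sits near the maximum distance of eleven.
avg = sum(d * n for d, n in enumerate(pocket)) / sum(pocket)
print(round(avg, 2))
```

That the average comes out close to the maximum is the "South Pole" effect described above: most scrambled states live near the far end of the space.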

In Chapter 14, I described the game of inverting a handful of twists made 
on a pristine Cube, and mentioned that Kate Fried could regularly invert seven and 
once had undone ten. Peter Suber (the inventor of Nomic-see Chapter 4) calls this 
challenge the "inductive game", and has mastered it to the same level as Kate Fried 
did. He has written a short article describing this art, called "Introduction to the 
Inductive Game of Rubik's Cube". In it he explains why he calls it that: 

The normal game is inductive only in the process a player undergoes in 
discovering the algorithms sufficient for solution. That process has been said 


to model the scientific method, complete with the formulation and testing of 
theories, negative results, and confirmation. The "inductive game" is inductive in 
that way and more. The process of discovering the rules of mastery is similarly 
inductive; but the product is also inductive. Instead of producing algorithms that 
may be applied infallibly by an idiot, the inductive game produces "soft rules" or 
probabilistic guides that must be applied in each case with judgment, mother wit, 
and the weight of one's inductive experience .... 

The inductive game cannot become routine or boring, except perhaps to 
gods. When one can solve three-twist randomizations nearly 100 percent of the 
time, then one may move on to four- twist randomizations. Difficulty increases 
exponentially. There is a foreseeable end to the series, of course. Players who 
patiently gather up their nuanced, ineffable knowledge of random patterns may 
reach 22-twist randomizations. Improvement does not merely approach the banal 
satisfaction of more frequent success; it approaches hard knowledge of God's 
algorithm. 

In the rest of his article, Suber details the results of his researches into this 
domain and comes up with many hints and heuristics based on his notion of 
information, defined as: "the adjacency of two or more tiles of the same color that 
need not (and ought not) be separated on the shortest path home". His basic 
guidelines (not to be interpreted overly rigidly) are: 

(1) Thou shalt not break up information. 

(2) Thou shalt endeavor to make more information. 

The catch is that many configurations give a false impression of containing 
information. Suber calls this "apparent information", as distinguished from "actual" 
information, and a large part of his article is devoted to hints for telling the two 
apart. Readers interested in obtaining a copy of his article may write to Suber at the 
Department of Philosophy, Earlham College, Richmond, Indiana 47374. 

* * * 

There is something tantalizing about the idea of precisely reversing a 
scrambling. Suppose you could undo any scrambled state, and that one time the 
resulting twist-sequence was found to be, say, UR⁻¹D²LBLDR⁻¹F²ULD⁻¹BR⁻¹U²L⁻¹DF. 
Would you be able to take this sequence apart and see any comprehensible 
structure there? That is, would there be some recognizable pieces inside it that 
explained how it undid that particular configuration? 
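Producing such an undoing sequence from a known scrambling is mechanical, even when understanding it is not: the inverse of a move-sequence is the reversed sequence of inverted moves. A small sketch (the helper below is my own illustration, using the conventional ' for a counterclockwise quarter-turn):

```python
def invert_sequence(moves):
    """Invert a cube move-sequence: reverse the order, invert each move.
    A move is a face letter optionally followed by ' (inverse) or 2
    (half-turn); half-turns are their own inverses."""
    def invert(m):
        if m.endswith("2"):
            return m          # R2 is undone by R2
        if m.endswith("'"):
            return m[0]       # R' is undone by R
        return m + "'"        # R is undone by R'

    return [invert(m) for m in reversed(moves)]

print(" ".join(invert_sequence(["U", "R'", "D2", "L"])))  # L' D2 R U'
```

The hard question in the text is the converse one: given only the scrambled state, no such mechanical recipe recovers the route in.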

Another way of asking the same question is perhaps more compelling. 
Some of my standard operators for getting things done on the Cube have the form 
of commutators, conjugates, powers, or combinations of such things. For such 
operators, I pretty much understand why they flip edges or do whatever they do. 
However, there are a couple of operators in my repertoire that I've simply 
memorized without having any understanding of 


why they accomplish what they do. For example, could it be that there's simply no 
explanation why R⁻¹D⁻¹RD⁻¹R⁻¹D²R undoes three quarks on the bottom layer? 
Could it be that there is, in other words, no conceptual breakdown to this operator? 
Such a sequence would resemble a very, very large prime number, a structure that 
admits of no breakdown into smaller "chunks". 

It seems almost certain that the shortest routes home from most scrambled 
states on the Cube will admit of no breakdown; in short, that most of the solutions 
given by God's algorithm are random, in the sense of having no internal rhyme or 
reason to them-very much like a sequence of tosses of a coin or die. (This concept 
of randomness is explained lucidly in the article "Randomness and Mathematical 
Proof" by Gregory Chaitin.) If this is the case, it would mean that after a certain 
point-most likely not far above ten twists-it will be a vain hope to try to undo a 
Cube state via the route that got you there. 

Getting into a scrambled state and getting out of it are operations of 
different computational complexity, just as getting yourself into a tight parking 
space and getting yourself out of it are operations of different automobilistic 
complexity. It is easier to find routes out than routes in, even though there are the 
same number of each. (In this analogy, being well parked is the analogue of getting 
to START, and being out in the street is the analogue of being scrambled.) Clearly, 
there is something deeply asymmetric about such a situation, and the whole thing 
smells of the second law of thermodynamics, which states that entropy tends to 
increase with time in a closed system. 

These informal intuitions can be made somewhat more precise. George 
Marx, Eva Gajzago, and Peter Gnadig of the Department of Atomic Physics, 
Eötvös University in Budapest, Hungary, have studied the Cube statistically in a 
paper called "The Universe of Rubik's Cube". To begin with, they define a face's 
"color vector" as an ordered set of six numbers, telling how many facelets on that 
face are red, orange, yellow, green, blue, and white, respectively. In START, the 
red face's color vector is thus < 9,0,0,0,0,0 > . After some scrambling, you will get 
color vectors more like this: <2,0,I,3,1,2>. Various numerical measures of any 
face's "degree of scrambledness" can be derived from its color vector. The choice 
made by these authors is the "length" of this vector-that is, the square root of the 
sum of the squares of its "sides". For <9,0,0,0,0,0>, that comes out as 9, while for 
the more typical < 2,0,1,3,1,2 >, it is about 4.36. The shortest possible color vector 
consists of three l's and three 2's, and has length just under 4. 
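
These length computations are easy to verify directly; here is a quick Python sketch using the vectors quoted above (the function name is mine, not from the article):

```python
import math

def color_vector_length(v):
    """Euclidean length of a face's color vector: the square root
    of the sum of the squares of its six entries."""
    return math.sqrt(sum(n * n for n in v))

# A pristine face: all nine facelets the same color.
print(color_vector_length((9, 0, 0, 0, 0, 0)))            # 9.0

# A typical well-scrambled face.
print(round(color_vector_length((2, 0, 1, 3, 1, 2)), 2))  # 4.36

# The shortest possible vector: three 1's and three 2's.
print(round(color_vector_length((1, 1, 1, 2, 2, 2)), 3))  # 3.873
```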

Marx, Gajzago, and Gnadig studied the statistics of this quantity as the 
Cube was twisted randomly, and discovered that faces whose color vector has 
length 4.36 are the most common. Shorter or longer color vectors are quite 
infrequent. If you start out at length 9 (a pristine Cube), then with random twisting 
the length tends to decrease quickly to a bit under 5, and 
then to fluctuate around that value. This observation is their empirical formulation 
of the second law of thermodynamics, establishing an "arrow of time". 

In accordance with standard usage in statistical mechanics, they define the 
entropy of a Cube face's state as the logarithm of the number of states that 
have the same macroscopic description-in this case, the same color vector 
(allowing rearrangements, so that < 2,0,1,3,1,2 > would be considered the same as 
< 2,1,3,2,0,1 >). Then they show that standard formulas that apply to entropy in 
real- world cases also apply to this "Cubical entropy". In particular, they remark: 
"The distribution of the colored squares on a mixed-up cube can be described in a 
similar way to how Maxwell and Boltzmann described the distribution of energy in 
the molecular chaos of a gas." At the conclusion of their article, Marx, Gajzago, 
and Gnadig wax lyrical: "I honor the cube as the smallest non-trivial model of the 
great physical universe." (italics theirs). (I suppose that when three authors jointly 
describe themselves as "I", it is a case of "the editorial I".) 
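
One concrete way to read their definition (my own sketch, not the authors' formulation, and ignoring the rearrangement-of-colors factor): count the microstates for a given color vector as the multinomial coefficient 9!/(n1!·n2!·...·n6!), the number of ways those color counts can be arranged among nine facelets, and take its logarithm:

```python
import math

def face_entropy(v):
    """Logarithm of the number of facelet arrangements sharing this
    color vector: a multinomial count (one simplified reading of the
    Marx-Gajzago-Gnadig definition)."""
    microstates = math.factorial(sum(v))
    for n in v:
        microstates //= math.factorial(n)
    return math.log(microstates)

# A pristine face admits exactly one arrangement: entropy 0, the minimum.
print(face_entropy((9, 0, 0, 0, 0, 0)))            # 0.0

# A typical scrambled face: 9!/(2!*1!*3!*1!*2!) = 15120 arrangements.
print(round(face_entropy((2, 0, 1, 3, 1, 2)), 2))  # 9.62
```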

* * * 

During the Cube's peak popularity, a large number of speed tournaments 
were held around the world and eventually a world champion emerged. He is Minh 
Thai, formerly of Viet Nam, now resident in the United States. His winning time 
on a scrambled Cube was 22.95 seconds. His average time seems to hover around 
24 seconds, ranging as far upwards as 25 once in a while. Which leads me to ask: 
Shouldn't he perhaps have been named Minh Time? 

There were also tournaments for the 4 X 4 X 4 cube, and there the best 
times I heard of were in the three-minute range. Uwe Meffert sent me a 5 X 5 X 5 
cube, which I must confess I never dared to scramble. I wonder how long the world 
champion would take on that! Meffert once described to me his dream of a "Magic 
Triathlon", in which participants would have to unscramble a trio of scrambled 
solids-as I recall, the objects involved were the Pyraminx, the Impossi*Ball, and 
the Megaminx (Meffert's revised name for his Pyraminx Magic Dodecahedron- see 
Figure 15-2d). My choices for the solids involved would have been different, but I 
liked the basic idea. I do not know if such an event ever took place. 

I see no reason why harder events could not be created, involving such 
esoteric skills as manipulating an N-dimensional cube represented in a computer, 
such as H. R. Kamack and T. R. Keane's Magic Tesseract. These two gentlemen, 
implementors of a 3 X 3 X 3 X 3 hypercube on a home computer, not only solved 
the "basic mathematical problem" for this horrendous pseudo-object, but also 
calculated the size of the group for the 3 X 3 X ... X 3 = 3^N hypercube, or what they 
call a "Rubik N-tope". For N=5, the size of this group is (approximately) 7.017 X 
10^560, a number not to be sneezed at! 

According to Singmaster, mathematicians Joe Buehler, Brad Jackson, and 
Dave Sibley studied the 3^N hypercube as well, and came up with a general 
algorithm for it, as well as various conservation laws for it. The even more general 
case of the M X M X M X ... X M = M^N hypercube remains unsolved, but it 
particularizes (along another conceptual dimension) to the M X M X M cube. 
Professor Jack Eidswick of the Department of Mathematics and Statistics at the 
University of Nebraska sent me an article that presents an algorithm for solving 
any member of this family of three-dimensional cubes. It is based on elaborate 
versions of some of the necessary operators described in Chapter 14, built out of 
conjugates and commutators and the like. I hear that Robert Brooks of the 
Mathematics Department of the University of Maryland also has worked out such 
an algorithm. 
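
Since conjugates and commutators do so much of the work in all these algorithms, it may help to see them in a stripped-down setting. Here is a sketch of my own in Python, with plain permutations of range(5) standing in for cube operators:

```python
def compose(g, h):
    """Compose permutations (tuples mapping i -> g[i]): apply h first, then g."""
    return tuple(g[i] for i in h)

def inverse(g):
    """Invert a permutation."""
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def commutator(a, b):
    """a.b.a^-1.b^-1 -- the identity except where a and b interfere."""
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

def conjugate(a, b):
    """a.b.a^-1 -- 'do b, but in the setting that a creates'."""
    return compose(compose(a, b), inverse(a))

a = (1, 2, 0, 3, 4)   # 3-cycle 0 -> 1 -> 2 -> 0, fixing 3 and 4
b = (0, 1, 3, 4, 2)   # 3-cycle 2 -> 3 -> 4 -> 2, fixing 0 and 1

print(commutator(a, b))   # (3, 1, 0, 2, 4): only 0, 2, 3 move; 1 and 4 are untouched
print(conjugate(a, b))    # (3, 1, 2, 4, 0): b's 3-cycle, relabeled through a
```

The moral mirrors the Cube: when a and b overlap on only one element (here, the element 2), their commutator disturbs only a small neighborhood of the overlap, which is exactly what makes commutators such surgical operators.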

* * * 

I finally must confront the matter of the cube fad's fading. David 
Singmaster's Cubic Circular is going under after Volume 8. Many thousands of 
Megaminxes were melted down for their plastic. Uwe Meffert's puzzle club seems 
to have been a flop. The Skewb and many other wonderful objects I described 
never hit the stands. A few that did were almost immediately gone forever. So ... 
have we seen the last of the Magic Cube? Are those cubes you bought going to be 
collector's items? Well, I am always loath to predict the future, but in this case I 
will make an exception. I am bullish on the cube. It seemed to seize the 
imagination wherever it went. Despite the line concluding my second cube column, 
the cubic fad finally did spill over into the Soviet Union. 

In my opinion, the world simply overdosed on cube-mania for a while. We 
humans are now collectively sick of the cube, but our turned-off state won't last too 
long- no more than it lasts when you tell yourself "I'll never eat spaghetti again!" 
after gorging on it. I predict that cubes will resurface slowly, here and there, and I 
am even hopeful that some new varieties will appear now and then. This is Mother 
Lode country. There may never again be quite the Gold Rush that we witnessed a 
couple of years ago, but there's still plenty of gold in them thar hills! 


Mathematical Chaos 
and Strange Attractors 

November, 1981 

I can't know how happy I am that we met, 
I'm strangely attracted to you. 

-Cole Porter, "It's All Right with Me" 

A few months ago, while walking through the corridors of the physics department 
of the University of Chicago with a friend, I spotted a poster announcing an 
international symposium titled "Strange Attractors". My eye could not help but be 
strangely attracted by this odd term, and I asked my friend what it was all about. He 
said it was a hot topic in theoretical physics these days. As he described it to me, it 
sounded quite wonderful and mysterious. 

I gathered that the basic idea hinges on looking at what might be called 
"mathematical feedback loops": expressions whose output can be fed back into them 
as new input, the way a loudspeaker's sounds can cycle back into a microphone and 
come out again. From the simplest of such loops, it seemed, both stable patterns and 
chaotic patterns (if this is not a contradiction in terms!) could emerge. The difference 
was merely in the value of a single parameter. Very small changes in the value of this 
parameter could make all the difference in the world as to the orderliness of the 
behavior of the loopy system. This image of order melting smoothly into chaos, of 
pattern dissolving gradually into randomness, was exciting to me. 

Moreover, it seemed that some unexpected "universal" features of the 
transition into chaos had recently been unearthed, features that depended solely on the 
presence of feedback and that were virtually insensitive to other details of the system. 
This generality was important, because any mathematical model featuring a gradual 
approach to chaotic behavior might provide a key insight into the onset of turbulence 
in all kinds of physical systems. Turbulence, in contrast to most phenomena 
successfully understood in physics, is a nonlinear phenomenon: two solutions to the 
equations of turbulence do not add up to a new solution. Nonlinear mathematical 
phenomena are much less well understood than linear ones, which is why a good 
mathematical description of turbulence has eluded physicists for a long time, and 
would be a fundamental breakthrough. 

When I later began to read about these ideas, I found out that they had actually 
grown out of many disciplines simultaneously. Pure mathematicians had begun 
studying the iteration of nonlinear systems by using computers. Theoretical 
meteorologists and population geneticists, as well as theoretical physicists studying 
such diverse things as fluids, lasers, and planetary orbits, had independently come up 
with similar nonlinear mathematical models featuring chaos-pregnant feedback loops 
and had studied their properties, each group finding some quirks that the others had 
not found. Moreover, not only theorists but also experimentalists from these widely 
separated disciplines had simultaneously observed chaotic phenomena that share 
certain basic patterns. I soon saw that the simplicity of the underlying ideas gives 
them an elegance that, in my opinion, rivals that of some of the best of classical 
mathematics. Indeed, there is an eighteenth- or nineteenth-century flavor to some of 
this work that is refreshingly concrete in this era of staggering abstraction. 

Probably the main reason these ideas are only now being discovered is that the 
style of exploration is entirely modern: it is a kind of experimental mathematics, in 
which the digital computer plays the role of Magellan's ship, the astronomer's 
telescope, and the physicist's accelerator. Just as ships, telescopes, and accelerators 
must be ever larger, more powerful, and more expensive in order to probe ever more 
hidden regions of nature, so one would need computers of ever greater size, speed, 
and accuracy in order to explore the remoter regions of mathematical space. By the 
same token, just as there was a golden era of exploration by ship and of discoveries 
made with telescopes and accelerators, characterized by a peak in the ratio of new 
secrets uncovered to money spent, so one would expect there to be a golden era in the 
experimental mathematics of these models of chaos. Perhaps this era has already 
occurred, or perhaps it is occurring right now. And perhaps after it, we will witness a 
flurry of theoretical work to back up these experimental discoveries. 

In any case, it is a curious and delightful brand of mathematics that is being 
done. This way of doing mathematics builds powerful visual imagery and intuitions 
directly into one's understanding. The power of computers 
allows one to bypass the traditional "theorem-proof-theorem-proof" brand of 
mathematics, and to arrive quickly at empirical observations and discoveries that 
reinforce each other, and that form a rich and coherent network of results. In the long 
run, it may turn out to be easier to find proofs of these results (if proofs are still 
desired), thanks to the careful and thorough exploration of the conceptual territory in 
advance. It's an upstart's way of doing mathematics, and not all mathematicians approve of it. 

One of the strongest proponents of this style of mathematizing has been 
Stanislaw M. Ulam, who, when computers were still young, turned them loose on 
problems of nonlinear iteration as well as on problems from many other branches of 
mathematics. It is from Ulam's early studies with Paul Stein that many of the ideas to 
be sketched here follow. 

* * * 

So much for romance. Let us work our way up to the concept of "strange 
attractors" by beginning with the more basic concept of an attractor. This whole field 
is founded on one concept: the iteration of a real-valued mathematical function-that is, 
the behavior of the sequence of values x, f(x), f(f(x)), f(f(f(x))), ... , where f is some 
interesting function. The initial value of x is called the seed. The idea is to feed f's 
output back into f as new input over and over again, to see if some kind of pattern 
emerges. 

An interesting and not too difficult problem concerning the iteration of a 
function is this: Can you invent a function p with the property that for any real value 
of x, p(x) is also real, and where p(p(x)) equals -x? The condition that p(x) be real is 
what gives the problem a twist; otherwise the function p(x) = ix (where i is the square 
root of -1) would work. In fact, you can even think of the challenge as that of finding 
a real-valued "square root of the minus sign". A related problem is to find a real- 
valued function q whose property is that q(q(x)) = 1/x for all x other than zero. Note 
that no matter how you construct p and q, each will have the property that, given any 
seed, repeated iteration creates a cycle of length four. 
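
Both puzzles are solvable, though any such p or q has to be discontinuous. Here is one construction (my own sketch, not from the text): on the positive axis, pair each interval (2n, 2n+1] with its neighbor (2n+1, 2n+2], hop forward on the first visit and hop back with a sign flip on the second, extending oddly to negative inputs; a matching q then comes free by conjugating p with logarithms:

```python
import math

def p(x):
    """A real-valued 'square root of the minus sign': p(p(x)) == -x."""
    if x == 0:
        return 0.0
    if x < 0:
        return -p(-x)              # extend oddly to negative inputs
    if math.ceil(x) % 2 == 1:      # x lies in some interval (2n, 2n+1]
        return x + 1.0             # hop into the partner interval
    return -(x - 1.0)              # hop back, flipping the sign

def q(x):
    """q(q(x)) == 1/x for nonzero x, built from p via logarithms."""
    if x > 0:
        return math.exp(p(math.log(x)))
    return -math.exp(p(math.log(-x)))

print(p(p(2.5)))     # -2.5
print(p(p(-2.5)))    # 2.5
print(q(q(4.0)))     # 0.25, up to floating-point rounding
```

Each orbit of p is a four-cycle x, x+1, -x, -x-1, back to x (taking a positive representative), matching the remark above about cycles of length four.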

Now, more generally, what kinds of functions, when repeatedly iterated, are 
likely to exhibit interesting cyclic or near-cyclic behavior? A simple function such as 
3x or x^3, when iterated, does not do anything like that. The nth iteration of 3x, for 
example, is 3 X 3 X 3 X ... X 3x, with n 3's, that is, 3^n x; and the nth iteration of x^3 is 
just (((x^3)^3)^3)... with n 3's again, which amounts to x^(3^n). Nothing cycle-like here; the 
values just keep going up and up and up. To reverse this trend, one needs a function 
with some sort of switchback, a little zigzag or twist. A more technical way of putting 
it is that one needs a nonmonotonic function: a function whose graph is folded, that is, 
it starts moving one way, say upward, and then bends back the other way, say 
downward. 

FIGURE 16-1. Two nonmonotonic, or "folded", functions in the unit square. 
In (a), a sharp peak, and in (b), a parabola. The maximum height of both is 
defined by the parameter λ. 

In Figure 16-1a, we have a sawtooth with a sharp point at its top, and 
in Figure 16-1b, a smoothly bending parabolic arc. Each of them rises from 
the origin, eventually reaches a peak height called λ, and then comes back 
down for a landing on the far side of the interval. Of course there are 
uncountably many shapes that rise to height λ and then come back down, but 
these two are among the simplest. And of the two, the parabola is perhaps the 
simpler, or at least the more mathematically appealing. Its equation is 
y = 4λx(1-x), with λ not exceeding 1. 
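
The two regimes can be watched directly by iterating this parabola. A minimal sketch follows; the parameter values λ = 0.7 and λ = 1.0 and the seeds are my own illustrative choices, not values singled out in the text:

```python
def f(x, lam):
    """The folded parabola y = 4*lam*x*(1 - x) on the unit interval."""
    return 4.0 * lam * x * (1.0 - x)

def iterate(lam, x, n):
    """Feed f's output back into f as new input, n times over."""
    for _ in range(n):
        x = f(x, lam)
    return x

# Orderly regime (lam = 0.7): iterates settle onto the attracting
# fixed point x* = 1 - 1/(4*lam) = 0.642857...
print(round(iterate(0.7, 0.123, 1000), 6))   # 0.642857

# Chaotic regime (lam = 1.0): two seeds differing by one ten-millionth
# soon lead utterly different lives.
a, b, gap = 0.3, 0.3000001, 0.0
for step in range(200):
    a, b = f(a, 1.0), f(b, 1.0)
    if step >= 100:               # after a while, track how far apart they get
        gap = max(gap, abs(a - b))
print(gap > 0.1)                  # True: the tiny seed difference has been amplified hugely
```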

We allow input (values of x) only between 0 and 1. As the graph 
shows, for any x in that interval, the output (y) always is between 0 and λ. 
Therefore the output value can always be fed back into the function as input, 
which ensures that repeated iteration will always be possible. When you 
repeatedly iterate a "fo